Kubeadm: kubelet won't restart after reboot - Unable to register node with API server: connection refused

Created on 27 Jul 2018  ·  60 comments  ·  Source: kubernetes/kubeadm

Thanks for filing an issue! Before hitting the button, please answer these questions.

Is this a request for help?

Kind of, but I have searched StackOverflow and googled many times without finding this issue addressed. Also, it seems to affect more people.

What keywords did you search in kubeadm issues before filing this one?

The error messages I saw in journalctl

Is this a BUG REPORT or FEATURE REQUEST?

Bug report

Versions

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version: Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
    The server is also 1.11, but since it is not starting at the moment, kubectl version will not show it
  • Cloud provider or hardware configuration: Local hardware (self-hosted)
  • OS:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel: Linux kubernetes 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

What happened?

The kubelet service does not start

What did you expect to happen?

The kubelet service should start

How to reproduce it (as minimally and precisely as possible)?

  • Used kubeadm to deploy Kubernetes
  • Deployed several services and confirmed that everything was working properly
  • Rebooted
  • The kubelet service no longer starts (a quick way to confirm this state is sketched below)
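A minimal sketch for confirming that state on a systemd host such as this one (standard systemd tooling; the commands themselves are not part of the original report):

systemctl status kubelet docker                    # are both units active after the reboot?
journalctl -b -u kubelet --no-pager | tail -n 40   # kubelet messages from the current boot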

Anything else we need to know?

journalctl logs:

Jul 27 14:46:17 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.608612    1619 server.go:408] Version: v1.11.1
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.609679    1619 plugins.go:97] No cloud provider specified.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.613651    1619 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.709720    1619 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710299    1619 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710322    1619 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710457    1619 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710515    1619 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710600    1619 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710617    1619 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710751    1619 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710814    1619 kubelet.go:299] Watching apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711655    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711661    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711752    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717242    1619 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717277    1619 client.go:104] Start docker client with request timeout=2m0s
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.718726    1619 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.718756    1619 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.721656    1619 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.721975    1619 docker_service.go:253] Docker cri networking managed by cni
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733083    1619 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:66 ContainersRunning:0 ContainersPaused:0 ContainersStopped:66 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:15 OomKillDisable:true NGoroutines:22 SystemTime:2018-07-27T14:46:17.727178862+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420ebd110 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc421016140} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733181    1619 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.825381    1619 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.839306    1619 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.840955    1619 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841036    1619 server.go:986] Started kubelet
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841423    1619 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841448    1619 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841462    1619 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841479    1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841710    1619 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841754    1619 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.842653    1619 server.go:302] Adding debug handlers to kubelet server.
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.868316    1619 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872508    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-hostnamed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872925    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873312    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873703    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874064    1619 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874452    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-collect.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874765    1619 container.go:393] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875097    1619 container.go:393] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875392    1619 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875679    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876007    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-replay.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876289    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876567    1619 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876913    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877200    1619 container.go:393] Failed to create summary reader for "/system.slice/kubelet.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877503    1619 container.go:393] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877792    1619 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878118    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878486    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878912    1619 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879312    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879802    1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880172    1619 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880491    1619 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880788    1619 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881112    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881402    1619 container.go:393] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881710    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882166    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882509    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882806    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883115    1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883420    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-dispatcher.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883704    1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884005    1619 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884329    1619 container.go:393] Failed to create summary reader for "/system.slice/system-selinux\\x2dpolicy\\x2dmigrate\\x2dlocal\\x2dchanges.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884617    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884907    1619 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885213    1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885466    1619 container.go:393] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885730    1619 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886098    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886384    1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.913789    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917905    1619 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917923    1619 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917935    1619 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.926164    1619 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.932356    1619 container.go:393] Failed to create summary reader for "/libcontainer_1619_systemd_test_default.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941592    1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941762    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.944471    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.944714    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: Starting Device Plugin manager
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.986308    1619 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986668    1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986680    1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986749    1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 1619
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986755    1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 1619
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.144855    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.148528    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.148933    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.158503    1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-mon0-4txgr_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.300596    1619 pod_container_deletor.go:75] Container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.323729    1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-osd-id-0-54d59fc64b-c5tw4_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.516802    1619 pod_container_deletor.go:75] Container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.549067    1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.552841    1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.553299    1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.674143    1619 pod_container_deletor.go:75] Container "96b85439f089170cf6161f5410f8970de67f0609d469105dff4e3d5ec2d10351" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.712440    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.713284    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.714397    1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:19 kubernetes kubelet[1619]: W0727 14:46:19.139032    1619 pod_container_deletor.go:75] Container "7b9757b85bc8ee4ce6ac954acf0bcd5c06b2ceb815aee802a8f53f9de18d967f" not found in pod's containers

And it continues like that, with messages about being unable to register the node kubernetes (that is my hostname) and failing to list kubernetes resources.

Early on, I applied the self-hosted recovery script (https://github.com/xetys/k8s-self-hosted-recovery) so that reboots would not affect the cluster. Here are its logs:

Jul 27 14:46:09 kubernetes systemd[1]: Starting Recovers self-hosted k8s after reboot...
Jul 27 14:46:09 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Restoring old plane...
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
Jul 27 14:46:17 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Waiting while the api server is back..

I am out of ideas and would welcome any help you can provide.

area/HA help wanted kind/bug priority/awaiting-more-evidence sig/cluster-lifecycle

Most helpful comment

try this on each node @PierrickI3

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
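A note on why this tends to help (an inference; the comment itself does not explain it): kube-proxy and the CNI plugin program NAT and filter rules into iptables, and stale or conflicting rules can leave the API server's 6443 endpoint unreachable. Flushing them lets the restarted services rebuild a clean rule set. Starting docker before the kubelet (the reverse of the order shown) is also common, since the kubelet needs a running container runtime.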

All 60 comments

hello @PierrickI3

do you have connectivity to:

https://192.168.1.19:6443

I don't, but that is my local IP address:

[pierrick@kubernetes ~]$ curl https://192.168.1.19:6443
curl: (7) Failed connect to 192.168.1.19:6443; Connection refused
[pierrick@kubernetes ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fc:aa:14:9a:97:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.19/24 brd 192.168.1.255 scope global noprefixroute dynamic eno1
       valid_lft 86196sec preferred_lft 86196sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4e:90:66:f7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

6443 is the secure port of the API server.
is the API server running?

what is the output of kubectl get pods --all-namespaces

I have not seen a failure right after Watching apiserver before...

do you have a firewall blocking 6443?
also, can you share your API server manifest? redact any sensitive data, if needed.
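For anyone triaging the same symptom, a quick way to separate "API server not running" from "port blocked" (a sketch assuming the 192.168.1.19 address and docker runtime described above):

docker ps | grep kube-apiserver             # is the apiserver container up at all?
ss -tlnp | grep 6443                        # is anything listening on the secure port?
iptables -S | grep 6443                     # any firewall rule mentioning the port?
curl -k https://192.168.1.19:6443/healthz   # a healthy apiserver answers "ok"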

Here is the output from kubectl:

[pierrick@kubernetes ~]$ kubectl get pods --all-namespaces
The connection to the server 192.168.1.19:6443 was refused - did you specify the right host or port?

I don't have a firewall blocking this port, and I am ssh'd directly into the machine.

Here is the manifest (no sensitive data to hide since it is not exposed):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=192.168.1.19
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.19
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki

here is the output of docker ps:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
0ab99c997dc4        272b3a60cd68           "kube-scheduler --..."   37 minutes ago      Up 37 minutes                           k8s_kube-scheduler_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
058ee146b06f        52096ee87d0e           "kube-controller-m..."   37 minutes ago      Up 37 minutes                           k8s_kube-controller-manager_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
dca22f2e66c1        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
301d6736b6ad        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
10cf027f301b        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_kube-apiserver-kubernetes_kube-system_b784b670ba660d7fe4b0407690d68d81_4
c7c59c971636        k8s.gcr.io/pause:3.1   "/pause"                 37 minutes ago      Up 37 minutes                           k8s_POD_etcd-kubernetes_kube-system_6fd4d3c9fe373df920ce5e1e4572fd1d_8
--disable-admission-plugins=PersistentVolumeLabel

this one should be deprecated and disabled by default in 1.11.1; did you adapt an old kubeadm configuration?
(edit: oh wait, we did not cherry-pick this: https://github.com/kubernetes/kubernetes/pull/65827)

what are the contents of your kubeadm config (again, please redact sensitive data if needed)?

How do I retrieve the kubeadm config? Running kubeadm config view results in a connection refused error.
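(For context on that error: kubeadm config view reads the kubeadm-config ConfigMap through the API server, so it cannot work while the server is down. The on-disk configuration can still be inspected; a sketch assuming the default kubeadm file locations, which match the paths written during init later in this thread:)

cat /var/lib/kubelet/kubeadm-flags.env   # flags kubeadm generated for the kubelet
cat /var/lib/kubelet/config.yaml         # the kubelet component configuration
ls /etc/kubernetes/manifests/            # static pod manifests for the control plane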

To deploy, I followed the kubeadm instructions. Here is what I did:

# update yum packages
yum update -y

# install git, wget & docker
yum install -y git wget nano go docker

# add the go-repo and update Go (needed below to build crictl)
rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
yum update -y golang

# start Docker
systemctl enable docker && systemctl start docker

# disable swap (not supported by kubeadm)
swapoff -a

# add kubernetes repo to yum
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

setenforce 0 # required to allow containers to access the host filesystem (https://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable-enforcement.html). To disable permanently: https://www.tecmint.com/disable-selinux-temporarily-permanently-in-centos-rhel-fedora/

# disable firewall (I know, not great but I am fed up with opening ports and I am behind another firewall and I can do whatever I want)
systemctl disable firewalld && systemctl stop firewalld

###########
# KUBEADM #
###########

# install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

# prevent issues with traffic being routed incorrectly due to iptables being bypassed
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# install CRICTL (https://github.com/kubernetes-incubator/cri-tools), required by kubeadm
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

# deploy kubernetes
kubeadm init --pod-network-cidr=10.244.0.0/16

# allow kubectl for non sudoers (run this as a regular user)
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc

# For the root user, run this:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc

# deploy pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master

###################
# REBOOTING ISSUE #
###################
# At the time of writing this, rebooting causes kubernetes to no longer work. This will fix it (http://stytex.de/blog/2018/01/16/how-to-recover-self-hosted-kubeadm-kubernetes-cluster-after-reboot/)
git clone https://github.com/xetys/k8s-self-hosted-recovery
cd k8s-self-hosted-recovery
chmod +x install.sh
./install.sh
cd ..
kubeadm init --pod-network-cidr=10.244.0.0/16
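One thing worth flagging in the script above (an observation about the script, not a confirmed cause of this issue): swapoff -a is not persistent, so swap can come back at the next boot, and by default the kubelet refuses to run with swap enabled. Persisting the change would look something like:

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap entry so it stays off across reboots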

in this case you did not pass --config, so your configuration is pretty much:
kubeadm config print-default + the CIDR change.
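To see what that effective configuration looks like, the defaults can be dumped locally (the subcommand as it existed in kubeadm 1.11; it was renamed in later releases):

kubeadm config print-default > kubeadm-default.yaml
grep podSubnet kubeadm-default.yaml   # compare against the CIDR passed on the command line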

what happens if you use a different CNI?
(I know it worked before the reboot, but just testing...).

first this, without the CIDR:

# deploy kubernetes
kubeadm init

then:

# deploy pod network (weave)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master

edit: also play around with removing the taint line:
kubectl taint nodes --all node-role.kubernetes.io/master- ...

should I run kubeadm reset before trying again? or just run kubeadm init and then apply the CIDR?

should I run kubeadm reset before trying again?

yes, unless you have a reason not to?

I ran kubeadm reset and then got an error after [init] this might take a minute or longer if the control plane images have to be pulled:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 16:36:29.663874   26847 kernel_validator.go:81] Validating kernel version
I0727 16:36:29.664814   26847 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled


                Unfortunately, an error has occurred:
                        timed out waiting for the condition

                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                                - k8s.gcr.io/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.

                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'

                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

Here is the kubeadm reset output:

[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
       Unfortunately, an error has occurred:
                   timed out waiting for the condition

I have been seeing this from users for the past couple of days, for some reason....

can you pull the images beforehand using:

kubeadm config images pull --kubernetes-version 1.11.1

then try init again.
if it fails again, what does journalctl say about the kubelet?

here is the output of kubeadm config images pull --kubernetes-version 1.11.1:

[root@kubernetes ~]# kubeadm config images pull --kubernetes-version 1.11.1
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3

Then I ran kubeadm reset:

[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

Then kubeadm init:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 17:02:09.000020    3408 kernel_validator.go:81] Validating kernel version
I0727 17:02:09.000209    3408 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled

                Unfortunately, an error has occurred:
                        timed out waiting for the condition

                This error is likely caused by:
                        - The kubelet is not running
                        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                                - k8s.gcr.io/etcd-amd64:3.2.18
                                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                  are downloaded locally and cached.

                If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                        - 'systemctl status kubelet'
                        - 'journalctl -xeu kubelet'

                Additionally, a control plane component may have crashed or exited when started by the container runtime.
                To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                Here is one example how you may list all Kubernetes containers running in docker:
                        - 'docker ps -a | grep kube | grep -v pause'
                        Once you have found the failing container, you can inspect its logs with:
                        - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

last journalctl entries:

Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604819    3636 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604946    3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)
Jul 27 17:03:00 kubernetes kubelet[3636]: E0727 17:03:00.604982    3636 pod_workers.go:186] Error syncing pod 6fd4d3c9fe373df920ce5e1e4572fd1d ("etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.175841    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.179832    3636 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.180286    3636 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.414597    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.415518    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.416549    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.428500    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732777    3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732893    3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325269    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325333    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: W0727 17:03:02.329421    3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.415253    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.416228    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.417254    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629551    3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629669    3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629850    3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.629891    3636 pod_workers.go:186] Error syncing pod 32544bee4c007108f4b6c54da83cc67e ("kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.415923    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.416815    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.417995    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.416640    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.417583    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.418535    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.417293    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.418283    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.419337    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: W0727 17:03:05.511995    3636 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.512167    3636 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.396897    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.401380    3636 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.417876    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.418801    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.419836    3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.577952    3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.582165    3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused

I haven't been able to add any network pods yet.

yes, this is failing before that.
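
The "cni config uninitialized" warnings in the logs above are expected before a pod network is installed; what matters is that etcd is already crash-looping. One quick way to surface that from the kubelet journal (assuming the kubelet.service unit shown here):

journalctl -u kubelet --no-pager | grep -i 'back-off'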

images pull working implies that you have connectivity to the gcr.io buckets.

please restart the kubelet manually and see what the logs show:

systemctl restart kubelet

systemctl status kubelet # <---- ?
journalctl -xeu kubelet   # <---- ?

I'm running out of ideas.

  • Ran systemctl restart kubelet
  • systemctl status kubelet:
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2018-07-27 17:22:15 CEST; 6s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 10401 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─10401 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network...

Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Hint: Some lines were ellipsized, use -l to show in full.
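
Note that the status above shows the kubelet unit itself running while every request to 192.168.1.19:6443 is refused, which points at the kube-apiserver/etcd static pods rather than the kubelet. A minimal check with the docker runtime (CONTAINERID is a placeholder):

# list the control-plane containers and their state
docker ps -a | grep -E 'kube-apiserver|etcd' | grep -v pause

# then read the logs of whichever one is exiting
docker logs CONTAINERID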

journalctl -xeu kubelet | less:

-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun shutting down.
Jul 27 17:22:15 kubernetes systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 27 17:22:15 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439279   10401 server.go:408] Version: v1.11.1
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439636   10401 plugins.go:97] No cloud provider specified.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.443829   10401 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478490   10401 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478906   10401 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478931   10401 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479017   10401 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479052   10401 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479103   10401 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479113   10401 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479206   10401 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479256   10401 kubelet.go:299] Watching apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.479973   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480000   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480053   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484688   10401 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484709   10401 client.go:104] Start docker client with request timeout=2m0s
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486438   10401 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.486467   10401 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486599   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488739   10401 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488845   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.488882   10401 docker_service.go:253] Docker cri networking managed by cni
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499501   10401 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:8 ContainersRunning:6 ContainersPaused:0 ContainersStopped:2 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2018-07-27T17:22:15.493615517+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420f3e000 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420f48000} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499629   10401 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.510960   10401 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.511524   10401 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512249   10401 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512297   10401 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512389   10401 server.go:986] Started kubelet
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512717   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513432   10401 server.go:302] Adding debug handlers to kubelet server.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513637   10401 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513742   10401 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513810   10401 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513901   10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516479   10401 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516557   10401 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.518992   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.519170   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557259   10401 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557505   10401 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557765   10401 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.561683   10401 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565386   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565666   10401 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567498   10401 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567815   10401 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568113   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568957   10401 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.569544   10401 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572074   10401 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572295   10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.575556   10401 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580473   10401 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580825   10401 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614059   10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614738   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.617620   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.618177   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.618610   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620065   10401 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620083   10401 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620097   10401 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 17:22:15 kubernetes kubelet[10401]: Starting Device Plugin manager
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621716   10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621726   10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621797   10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621804   10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.622220   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.818778   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.822804   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.823257   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.223461   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.227305   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.227718   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.480650   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.481646   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.482678   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.027967   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.031757   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.032125   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.481348   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.482224   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.483219   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.482059   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.483034   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.484060   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.632340   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.636096   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.484293   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.485115   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.486258   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.836741   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.840909   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.841341   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.484956   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.485979   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.487497   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.485724   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.486543   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.488107   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.486391   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.487305   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.488708   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.487061   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.488108   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.489302   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.622507   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:25 kubernetes kubelet[10401]: W0727 17:22:25.624576   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.624815   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.487846   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.488682   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.489896   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.488510   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.489607   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.490628   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.241605   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.245892   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.246319   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.489234   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.490271   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.491127   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.489955   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.490777   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.491942   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.490671   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.491715   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.492608   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: W0727 17:22:30.626001   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.626177   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.058799   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.491375   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.492253   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.493403   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.492126   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.493049   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.494127   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.492838   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.493750   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.494858   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.493558   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.494514   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.495496   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.246559   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.251191   10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.251606   10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.493606   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.494144   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.495144   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.496164   10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530217   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530338   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.534682   10401 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552571   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-certs") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552633   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-data") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563624   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563750   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.567753   10401 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596873   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596962   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.601046   10401 status_manager.go:482] Failed to get status for pod "kube-controller-manager-kubernetes_kube-system(fe4b0bda62e8e0df1386dc034ba16ee3)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.622704   10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.627154   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.627334   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.630351   10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.634273   10401 status_manager.go:482] Failed to get status for pod "kube-scheduler-kubernetes_kube-system(537879acc30dd5eff5497cb2720a6d64)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.652980   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/32544bee4c007108f4b6c54da83cc67e-k8s-certs") pod "kube-apiserver-kubernetes" (UID: "32544bee4c007108f4b6c54da83cc67e")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.653036   10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fe4b0bda62e8e0df1386dc034ba16ee3-ca-certs") pod "kube-controller-manager-kubernetes" (UID: "fe4b0bda62e8e0df1386dc034ba16ee3")

what happens if you stop everything kubernetes-related on this node and try to run a small server listening on 192.168.1.19:6443?

kubeadm reset
systemctl stop kubelet

netstat -tulpn
to make sure nothing is listening there.

write this to a file test.go

package main
import (
  "net/http"
  "strings"
)
func sayHello(w http.ResponseWriter, r *http.Request) {
  message := r.URL.Path
  message = strings.TrimPrefix(message, "/")
  message = "Hello " + message
  w.Write([]byte(message))
}
func main() {
  http.HandleFunc("/", sayHello)
  if err := http.ListenAndServe("192.168.1.19:6443", nil); err != nil {
    panic(err)
  }
}

go run test.go

$ curl 192.168.1.19:6443
Does that work?

I know it's silly, but I'm running out of ideas.

Output of netstat -tulpn:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      989/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      989/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           807/dhclient

curl output after running test.go:

[pierrick@kubernetes ~]$ curl 192.168.1.19:6443
Hello [pierrick@kubernetes ~]$

well, that worked!
I'm sorry, but I can't help any further today.

maybe someone else can look at the logs and figure out what's going on?

Anyone specific you have in mind? Could you tag them in this issue?

@kubernetes/sig-cluster-lifecycle-bugs

@PierrickI3 Can you check whether the application server is listening on port 6443? If so, can you check your proxy settings (if any)?

netstat -tulpn |grep 6443
set |grep -i proxy

@bart0sh Thanks, but my weekend is starting, so I can't troubleshoot this any further. I will close this issue.

FYI, the output of netstat -tulpn is shown above (https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-408457991). I don't have a proxy (direct connection to the internet).

Why was this closed? I'm facing a similar issue on my VirtualBox VM with CentOS 7..
So the only solution is to build a brand new box every time???

@PierrickI3 I'm hitting the exact same issue with mostly similar logs. Did you find a solution or workaround??

Unfortunately not. I gave up and moved back to minikube for dev purposes. But I'd like to come back to kubeadm if a solution is found.

@PierrickI3 ok.. if I get anywhere I'll let you know

same issue running Ubuntu 16 on VMWare.

same issue running on RHEL 7.5

same issue running on a CentOS 7 cluster

same with a Kubic cluster

try this on every node @PierrickI3

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
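
One caveat: iptables --flush wipes every rule in the filter table (and iptables -t nat --flush the NAT table), so if the node carries firewall rules you still need, it may be worth saving them first. A minimal sketch, assuming the standard iptables-save/iptables-restore tools are available:

iptables-save > /tmp/iptables.backup
# and later, if needed: iptables-restore < /tmp/iptables.backup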

Maybe you forgot to run this after installing @PierrickI3:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

We are hitting the exact same issue on RHEL 7. It just stopped working out of nowhere. This is critical for us.

I had to run
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

etc., etc., but the master node refuses connections, even to all the nodes in our cluster.

I was never able to reproduce this issue; the cluster just comes back up fine 2-3 minutes after a reboot.
It must be something related to individual setups.

It looks like my API server is no longer running. I've tried restarting the kubelet several times, all without success. The funny thing is, this had been working and I hadn't touched it until yesterday, when I was about to add more docker images for our cluster.

I figured out what was going on. The mounted /var directory had filled up. It now works as expected.

I assumed, since the rules for port 6443 had been added to iptables, since I kept getting connection refused even from localhost, and since docker ps showed no running containers, that the API service (and the other services) were not running, which meant something odd was going on. And of course... the odd part is that nothing in the kubectl logs shows why the API service failed to start.
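
For anyone hitting the same wall, a quick way to rule out a full disk before digging through logs (assuming standard coreutils) is:

df -h /var
df -i /var

(the second check matters because inodes can run out even while free space remains).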

I figured out what was going on. The mounted /var directory had filled up.

OMG THANK YOU!!!

The systemctl status kubelet command shows this error info:
"Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173   10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379   10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116   10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)"

So
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
add these parameters: --network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin
then
kubeadm reset
kubeadm init
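
For reference, a sketch of what that edit might look like inside the drop-in. The variable name here is an assumption based on the KUBELET_*_ARGS naming convention kubeadm's drop-in files use; adjust it to whatever your 10-kubeadm.conf actually contains:

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin"

followed by systemctl daemon-reload so systemd picks up the change before anything is restarted.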

try this on every node @PierrickI3

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

This fixed my issue, although I only did it on the master node and just ran kubeadm reset on the other nodes.

Thanks @manoj-bandara, but I've given up for now and will come back to this later this year. See https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-420948092

same issue here - tried a lot of things. It looks like this was the problem:
failed: open /run/systemd/resolve/resolv.conf: no such file or directory

I symlinked /etc/resolv.conf to that location and restarted the kubelet (only on the master), and then the master and all the nodes came back up again.
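
In case it helps anyone else, the fix was roughly this (assuming the paths from the error above):

mkdir -p /run/systemd/resolve
ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
systemctl restart kubelet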

In my case it was a swap issue; fixed by turning swap off:

sudo swapoff -a
sudo systemctl restart kubelet.service
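
Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, one common approach is to comment out the swap entry in /etc/fstab, for example:

sudo sed -i '/ swap / s/^/#/' /etc/fstab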

try this on every node @PierrickI3

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

OMG, amazing commands!! Thank you so much!!

try this on every node @PierrickI3

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

OMG, amazing commands!! Thank you so much!!

@JTRNEO, I'll have to remember this one too :)

I'm facing the same issue across several installations and haven't found the answer yet. I tried the Flannel, Calico and Weave drivers and the result is the same, so the problem probably isn't related to the network plugin.
I'm not sure whether this is related, but even though I ran "sudo swapoff -a", after a restart the server asks me to turn swap off again.

Same issue here. My kube-apiserver logs are filled with:

I1124 18:44:13.180672       1 log.go:172] http: TLS handshake error from 192.168.1.235:56160: EOF
I1124 18:44:13.186601       1 log.go:172] http: TLS handshake error from 192.168.1.235:56244: EOF
I1124 18:44:13.201880       1 log.go:172] http: TLS handshake error from 192.168.1.192:56340: EOF
I1124 18:44:13.208991       1 log.go:172] http: TLS handshake error from 192.168.1.235:56234: EOF
I1124 18:44:13.248214       1 log.go:172] http: TLS handshake error from 192.168.1.235:56166: EOF
I1124 18:44:13.292943       1 log.go:172] http: TLS handshake error from 192.168.1.235:56272: EOF
I1124 18:44:13.332362       1 log.go:172] http: TLS handshake error from 192.168.1.235:56150: EOF
I1124 18:44:13.352911       1 log.go:172] http: TLS handshake error from 192.168.1.249:41300: EOF

Flushing iptables didn't work for me, and my swap is already off.

Did you install flannel on the worker nodes?

No, I use kube-router for networking. Its pods are in crash backoff status, with errors about not being able to talk to the api server.

This cluster ran fine for months, but then a power outage rebooted my machines and broke the cluster.

@cjbottaro
could your kubelet client certificates have expired?

see the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration

On nodes created with kubeadm init, before kubeadm version 1.17 ...
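
If you want to inspect the certificate directly, one quick way (assuming the default path kubeadm uses for the kubelet client cert) is:

openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem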

same issue running Ubuntu 16 on VMWare.

I'm running my cluster on vmware too; what solved it for you? Thanks

Had the same issue today after editing the service-cidr setting on my new kube cluster. The problem for me was that the kube-apiserver docker container kept crash-looping. After looking at the logs with docker logs, I saw:
Error: error determining service IP ranges for primary service cidr: The service cluster IP range must be at least 8 IP addresses.
I had thought I could provide a smaller /30 cidr for the services, but needed to open it up to a /29: a /30 only holds 2^(32-30) = 4 addresses, while a /29 holds exactly the required 2^(32-29) = 8.

Not sure if it's related, but note that 1.17.0 had a CIDR-related bug, so hopefully you're running a newer .17 patch.

You may need to change the iptables rules.
Run iptables -L --line-numbers to find the reject-with icmp-host-prohibited rule.
Then run iptables -D INPUT 153 to delete it (153 here is the rule's line number).
Finally, restart the kubelet.
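
Putting that together, a minimal sketch (the rule number is just an example; use whatever iptables -L reports on your node):

iptables -L INPUT --line-numbers | grep -i reject
iptables -D INPUT 153
systemctl restart kubelet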

I had a similar issue on a kubeadm cluster. I just ran docker restart $(docker ps -qa) twice.

Having the same issue as @PierrickI3. After a reboot the control plane node goes down. The kubelet service is running, trying to connect to an apiserver that isn't running. etcd is running. There are no CNI network interfaces, only loopback, ethernet, and docker.

No specific errors visible anywhere; it just doesn't start, even though it worked fine for a long time before the reboot. I've tried everything mentioned in this thread as well as in many others.

I'm genuinely confused about what starts what here, which makes it hard to investigate further. Does the kubelet service bring up the CNI network and then start the control plane pods (i.e. api, scheduler, etc.)? I've checked docker ps -a and none of the control plane containers have even been attempted; there is also no restart policy on those containers. So why is the kubelet trying to talk to the api server when it hasn't started it yet?

Had a similar issue; it isn't resolved for me yet, but the problem occurred because docker had been upgraded to an incompatible version. Just check whether your docker service is running or not.

@cjbottaro
could your kubelet client certificates have expired?

see the second warning here:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration

On nodes created with kubeadm init, before kubeadm version 1.17 ...

Thank you. My machine could not connect after a restart. After checking the certificates, it turned out three of them had expired.

[root@localhost home]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

W0128 14:02:50.815166   21689 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 17, 2021 06:56 UTC   232d                                    no      
apiserver                  Sep 17, 2021 06:55 UTC   232d            ca                      no      
apiserver-etcd-client      Sep 17, 2021 06:55 UTC   232d            etcd-ca                 no      
apiserver-kubelet-client   Sep 17, 2021 06:55 UTC   232d            ca                      no      
controller-manager.conf    Sep 17, 2021 06:56 UTC   232d                                    no      
etcd-healthcheck-client    Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
etcd-server                Dec 19, 2020 07:56 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Sep 17, 2021 06:55 UTC   232d            front-proxy-ca          no      
scheduler.conf             Sep 17, 2021 06:56 UTC   232d                                    no   

So, renew the certificates:

kubeadm alpha certs renew etcd-healthcheck-client
kubeadm alpha certs renew etcd-peer
kubeadm alpha certs renew etcd-server

Restart the services:

systemctl daemon-reload
systemctl restart kubelet
systemctl restart docker

It works fine now.
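
After renewing, re-running kubeadm alpha certs check-expiration (as above) should confirm the new expiry dates before you rely on the node again.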

Hi,
Please check whether you have swap enabled on the master and worker nodes. If so, disable it and restart the services.
