What happened: after upgrading 15 Kubernetes clusters from 1.17.5 to 1.18.2 / 1.18.3, we started noticing that daemonsets no longer work correctly.
The problem is that daemonsets do not provision all of their pods. They return the following error message in events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 9s (x5 over 71s) default-scheduler 0/13 nodes are available: 12 node(s) didn't match node selector.
However, all nodes are available and there are no node selectors. The nodes have no taints either.
DaemonSet: https://gist.github.com/zetaab/4a605cb3e15e349934cb7db29ec72bd8
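(For completeness, taints and labels on the nodes can be double-checked with something like the two commands below; output omitted here.)
% kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
% kubectl get nodes --show-labels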
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
e2etest-1-kaasprod-k8s-local Ready node 46h v1.18.3
e2etest-2-kaasprod-k8s-local Ready node 46h v1.18.3
e2etest-3-kaasprod-k8s-local Ready node 44h v1.18.3
e2etest-4-kaasprod-k8s-local Ready node 44h v1.18.3
master-zone-1-1-1-kaasprod-k8s-local Ready master 47h v1.18.3
master-zone-2-1-1-kaasprod-k8s-local Ready master 47h v1.18.3
master-zone-3-1-1-kaasprod-k8s-local Ready master 47h v1.18.3
nodes-z1-1-kaasprod-k8s-local Ready node 47h v1.18.3
nodes-z1-2-kaasprod-k8s-local Ready node 47h v1.18.3
nodes-z2-1-kaasprod-k8s-local Ready node 46h v1.18.3
nodes-z2-2-kaasprod-k8s-local Ready node 46h v1.18.3
nodes-z3-1-kaasprod-k8s-local Ready node 47h v1.18.3
nodes-z3-2-kaasprod-k8s-local Ready node 46h v1.18.3
% kubectl get pods -n weave -l weave-scope-component=agent -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
weave-scope-agent-2drzw 1/1 Running 0 26h 10.1.32.23 e2etest-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-4kpxc 1/1 Running 3 26h 10.1.32.12 nodes-z1-2-kaasprod-k8s-local <none> <none>
weave-scope-agent-78n7r 1/1 Running 0 26h 10.1.32.7 e2etest-4-kaasprod-k8s-local <none> <none>
weave-scope-agent-9m4n8 1/1 Running 0 26h 10.1.96.4 master-zone-1-1-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-b2gnk 1/1 Running 1 26h 10.1.96.12 master-zone-3-1-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-blwtx 1/1 Running 2 26h 10.1.32.20 nodes-z1-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-cbhjg 1/1 Running 0 26h 10.1.64.15 e2etest-2-kaasprod-k8s-local <none> <none>
weave-scope-agent-csp49 1/1 Running 0 26h 10.1.96.14 e2etest-3-kaasprod-k8s-local <none> <none>
weave-scope-agent-g4k2x 1/1 Running 1 26h 10.1.64.10 nodes-z2-2-kaasprod-k8s-local <none> <none>
weave-scope-agent-kx85h 1/1 Running 2 26h 10.1.96.6 nodes-z3-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-lllqc 0/1 Pending 0 5m56s <none> <none> <none> <none>
weave-scope-agent-nls2h 1/1 Running 0 26h 10.1.96.17 master-zone-2-1-1-kaasprod-k8s-local <none> <none>
weave-scope-agent-p8njs 1/1 Running 2 26h 10.1.96.19 nodes-z3-2-kaasprod-k8s-local <none> <none>
I tried restarting the apiservers / schedulers / controller-managers, but it did not help. I also tried restarting the single node that is stuck (nodes-z2-1-kaasprod-k8s-local), but that did not help either. Only deleting that node and recreating it helps.
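For reference, "deleting the node and recreating it" amounts to roughly the following (a sketch; the exact instance replacement step depends on the kops/OpenStack setup, and --delete-local-data may not be needed):
% kubectl drain nodes-z2-1-kaasprod-k8s-local --ignore-daemonsets --delete-local-data
% kubectl delete node nodes-z2-1-kaasprod-k8s-local
# then replace the backing OpenStack instance so a fresh node registers under the same instance group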
% kubectl describe node nodes-z2-1-kaasprod-k8s-local
Name: nodes-z2-1-kaasprod-k8s-local
Roles: node
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=59cf4871-de1b-4294-9e9f-2ea7ca4b771f
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=regionOne
failure-domain.beta.kubernetes.io/zone=zone-2
kops.k8s.io/instancegroup=nodes-z2
kubernetes.io/arch=amd64
kubernetes.io/hostname=nodes-z2-1-kaasprod-k8s-local
kubernetes.io/os=linux
kubernetes.io/role=node
node-role.kubernetes.io/node=
node.kubernetes.io/instance-type=59cf4871-de1b-4294-9e9f-2ea7ca4b771f
topology.cinder.csi.openstack.org/zone=zone-2
topology.kubernetes.io/region=regionOne
topology.kubernetes.io/zone=zone-2
Annotations: csi.volume.kubernetes.io/nodeid: {"cinder.csi.openstack.org":"faf14d22-010f-494a-9b34-888bdad1d2df"}
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 10.1.64.32/19
projectcalico.org/IPv4IPIPTunnelAddr: 100.98.136.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 May 2020 13:28:24 +0300
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: nodes-z2-1-kaasprod-k8s-local
AcquireTime: <unset>
RenewTime: Sat, 30 May 2020 12:02:13 +0300
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Fri, 29 May 2020 09:40:51 +0300 Fri, 29 May 2020 09:40:51 +0300 CalicoIsUp Calico is running on this node
MemoryPressure False Sat, 30 May 2020 11:59:53 +0300 Fri, 29 May 2020 09:40:45 +0300 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 30 May 2020 11:59:53 +0300 Fri, 29 May 2020 09:40:45 +0300 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 30 May 2020 11:59:53 +0300 Fri, 29 May 2020 09:40:45 +0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 30 May 2020 11:59:53 +0300 Fri, 29 May 2020 09:40:45 +0300 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.1.64.32
Hostname: nodes-z2-1-kaasprod-k8s-local
Capacity:
cpu: 4
ephemeral-storage: 10287360Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8172420Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 9480830961
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8070020Ki
pods: 110
System Info:
Machine ID: c94284656ff04cf090852c1ddee7bcc2
System UUID: faf14d22-010f-494a-9b34-888bdad1d2df
Boot ID: 295dc3d9-0a90-49ee-92f3-9be45f2f8e3d
Kernel Version: 4.19.0-8-cloud-amd64
OS Image: Debian GNU/Linux 10 (buster)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.8
Kubelet Version: v1.18.3
Kube-Proxy Version: v1.18.3
PodCIDR: 100.96.12.0/24
PodCIDRs: 100.96.12.0/24
ProviderID: openstack:///faf14d22-010f-494a-9b34-888bdad1d2df
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-77pqs 100m (2%) 200m (5%) 100Mi (1%) 100Mi (1%) 46h
kube-system kube-proxy-nodes-z2-1-kaasprod-k8s-local 100m (2%) 200m (5%) 100Mi (1%) 100Mi (1%) 46h
volume csi-cinder-nodeplugin-5jbvl 100m (2%) 400m (10%) 200Mi (2%) 200Mi (2%) 46h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 300m (7%) 800m (20%)
memory 400Mi (5%) 400Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m27s kubelet, nodes-z2-1-kaasprod-k8s-local Starting kubelet.
Normal NodeHasSufficientMemory 7m26s kubelet, nodes-z2-1-kaasprod-k8s-local Node nodes-z2-1-kaasprod-k8s-local status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m26s kubelet, nodes-z2-1-kaasprod-k8s-local Node nodes-z2-1-kaasprod-k8s-local status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m26s kubelet, nodes-z2-1-kaasprod-k8s-local Node nodes-z2-1-kaasprod-k8s-local status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m26s kubelet, nodes-z2-1-kaasprod-k8s-local Updated Node Allocatable limit across pods
We see this randomly in all of these clusters.
What you expected to happen: the daemonset should provision pods to all nodes.
How to reproduce it (as minimally and precisely as possible): no real idea; install 1.18.x Kubernetes, deploy a daemonset, and then wait a few days(?).
Anything else we need to know?: when this happens, we cannot provision any other daemonset pods to that node either. As you can see, the fluent-bit logging pod is also missing. The kubelet logs on that node show no errors, and as mentioned above, restarting it did not help.
% kubectl get ds --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
falco falco-daemonset 13 13 12 13 12 <none> 337d
kube-system audit-webhook-deployment 3 3 3 3 3 node-role.kubernetes.io/master= 174d
kube-system calico-node 13 13 13 13 13 kubernetes.io/os=linux 36d
kube-system kops-controller 3 3 3 3 3 node-role.kubernetes.io/master= 193d
kube-system metricbeat 6 6 5 6 5 <none> 35d
kube-system openstack-cloud-provider 3 3 3 3 3 node-role.kubernetes.io/master= 337d
logging fluent-bit 13 13 12 13 12 <none> 337d
monitoring node-exporter 13 13 12 13 12 kubernetes.io/os=linux 58d
volume csi-cinder-nodeplugin 6 6 6 6 6 <none> 239d
weave weave-scope-agent 13 13 12 13 12 <none> 193d
weave weavescope-iowait-plugin 6 6 5 6 5 <none> 193d
As you can see, most of the daemonsets are missing one pod.
Environment:
Kubernetes version (kubectl version): 1.18.3
OS (cat /etc/os-release): Debian buster
Kernel (uname -a): Linux nodes-z2-1-kaasprod-k8s-local 4.19.0-8-cloud-amd64 #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27) x86_64 GNU/Linux
/sig scheduling
Can you provide the full yaml for the node, the daemonset, a sample pod, and the hosting namespace, as retrieved from the server?
Node:
https://gist.github.com/zetaab/2a7e8d3fe6cb42a617e17abc0fa375f7
DaemonSet:
https://gist.github.com/zetaab/31bb406c8bd622b3017bf4f468d0154f
Example pod (working):
https://gist.github.com/zetaab/814871bec6f2879e371f5bbdc6f2e978
Example pod (not scheduling!):
https://gist.github.com/zetaab/f3488d65486c745af78dbe2e6173fd42
Namespace:
https://gist.github.com/zetaab/4625b759f4e21b50757c79e5072cd7d9
The daemonset pods are scheduled with a nodeAffinity selector that matches only a single node, which is why the message reports that 12 of the 13 nodes didn't match.
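For context: the DaemonSet controller pins each pod to one node by name via an injected node affinity term, roughly like the sketch below (illustrative node name; the same structure is visible in the pod yaml later in this thread):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - nodes-z2-1-kaasprod-k8s-local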
There is no reason the scheduler should be unhappy with the pod/node combination... there are no ports in the pod spec that could conflict, the node is not unschedulable or tainted, and there are sufficient resources.
Well, I restarted all 3 schedulers (and changed the log level to 4 in case we can see something interesting there). Anyway, that fixed the problem:
% kubectl get ds --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
falco falco-daemonset 13 13 13 13 13 <none> 338d
kube-system audit-webhook-deployment 3 3 3 3 3 node-role.kubernetes.io/master= 175d
kube-system calico-node 13 13 13 13 13 kubernetes.io/os=linux 36d
kube-system kops-controller 3 3 3 3 3 node-role.kubernetes.io/master= 194d
kube-system metricbeat 6 6 6 6 6 <none> 36d
kube-system openstack-cloud-provider 3 3 3 3 3 node-role.kubernetes.io/master= 338d
logging fluent-bit 13 13 13 13 13 <none> 338d
monitoring node-exporter 13 13 13 13 13 kubernetes.io/os=linux 59d
volume csi-cinder-nodeplugin 6 6 6 6 6 <none> 239d
weave weave-scope-agent 13 13 13 13 13 <none> 194d
weave weavescope-iowait-plugin 6 6 6 6 6 <none> 194d
Now all daemonsets are provisioned correctly. Oddly enough, it does look like something is wrong in the scheduler.
cc @kubernetes/sig-scheduling-bugs @ahg-g
We are seeing a similar problem in v1.18.3: daemonset pods cannot be scheduled onto one of the nodes.
Restarting the scheduler helps.
[root@tesla-cb0434-csfp1-csfp1-control-03 ~]# kubectl get pod -A|grep Pending
kube-system coredns-vc5ws 0/1 Pending 0 2d16h
kube-system local-volume-provisioner-mwk88 0/1 Pending 0 2d16h
kube-system svcwatcher-ltqb6 0/1 Pending 0 2d16h
ncms bcmt-api-hfzl6 0/1 Pending 0 2d16h
ncms bcmt-yum-repo-589d8bb756-5zbvh 0/1 Pending 0 2d16h
[root@tesla-cb0434-csfp1-csfp1-control-03 ~]# kubectl get ds -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system coredns 3 3 2 3 2 is_control=true 2d16h
kube-system danmep-cleaner 0 0 0 0 0 cbcs.nokia.com/danm_node=true 2d16h
kube-system kube-proxy 8 8 8 8 8 <none> 2d16h
kube-system local-volume-provisioner 8 8 7 8 7 <none> 2d16h
kube-system netwatcher 0 0 0 0 0 cbcs.nokia.com/danm_node=true 2d16h
kube-system sriov-device-plugin 0 0 0 0 0 sriov=enabled 2d16h
kube-system svcwatcher 3 3 2 3 2 is_control=true 2d16h
ncms bcmt-api 3 3 0 3 0 is_control=true 2d16h
[root@tesla-cb0434-csfp1-csfp1-control-03 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
tesla-cb0434-csfp1-csfp1-control-01 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-control-02 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-control-03 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-edge-01 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-edge-02 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-worker-01 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-worker-02 Ready <none> 2d16h v1.18.3
tesla-cb0434-csfp1-csfp1-worker-03 Ready <none> 2d16h v1.18.3
This is hard to debug without knowing how to reproduce it. Do you have scheduler logs from when those pods failed to schedule?
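(For reference, the scheduler logs can usually be pulled with something like the following; the label value depends on the installer, e.g. component=kube-scheduler on kubeadm clusters and typically k8s-app=kube-scheduler on kops clusters:)
% kubectl -n kube-system logs -l component=kube-scheduler --tail=200
% kubectl -n kube-system logs -l k8s-app=kube-scheduler --tail=200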
Well, I restarted all 3 schedulers.
I assume only one of them is named default-scheduler, right?
Changed the log level to 4 in case we can see something interesting there.
Can you share anything you noticed?
We set the log level to 9, but there does not seem to be anything more interesting there. The log lines below just keep repeating:
I0601 01:45:05.039373 1 generic_scheduler.go:290] Preemption will not help schedule pod kube-system/coredns-vc5ws on any node.
I0601 01:45:05.039437 1 factory.go:462] Unable to schedule kube-system/coredns-vc5ws: no fit: 0/8 nodes are available: 7 node(s) didn't match node selector.; waiting
I0601 01:45:05.039494 1 scheduler.go:776] Updating pod condition for kube-system/coredns-vc5ws to (PodScheduled==False, Reason=Unschedulable)
But I could not see anything beyond the same line:
no fit: 0/8 nodes are available: 7 node(s) didn't match node selector.; waiting
Oddly, just like in the issue reported at https://github.com/kubernetes/kubernetes/issues/91340, the log message only shows results for 7 of the nodes.
/cc @damemi
@ahg-g this looks like the same issue I reported there; it seems there are other conditions under which a filter plugin fails silently, which we are left to guess at since the error is not always reported.
Note that in my case the issue was also fixed by restarting the scheduler (as mentioned in this thread: https://github.com/kubernetes/kubernetes/issues/91601#issuecomment-636360092).
Mine was about daemonsets too, so I think this is a duplicate. In that case we can close this one and continue the discussion in https://github.com/kubernetes/kubernetes/issues/91340.
In any case, the scheduler needs better logging options; when there are no logs about what it is doing, debugging these issues is impossible.
@zetaab +1, the scheduler could greatly improve its current logging capabilities. This is an upgrade I have been meaning to work on for a while, and I finally opened an issue for it here: (https:
/assign
I'm looking into this. A couple of questions that would help narrow down the case. I have not been able to reproduce it yet.
The nodes were created before the daemonset.
You say it uses the default profile, but which profile do you mean and how do I check that?
We have no extenders.
command:
- /usr/local/bin/kube-scheduler
- --address=127.0.0.1
- --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
- --profiling=false
- --v=1
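For context: with no --config file among those flags, the scheduler runs with its built-in default profile (as confirmed below). A 1.18-style configuration that declares profiles explicitly would look roughly like this sketch (not taken from this cluster):
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/kube-scheduler.kubeconfig
profiles:
- schedulerName: default-scheduler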
One other thing that could have an effect is that disk performance is not great for etcd; etcd complains now and then about slow operations.
Yes, with those flags the scheduler runs with the default profile. I'll keep looking. I still have not been able to reproduce it.
Still nothing... Is there anything else in use that you think could have an effect? Taints, ports, other resources?
We did some experiments related to this. Even while the problem is occurring, we can schedule pods to the node (with no affinity definitions, or using the "nodeName" selector).
When we try to use affinity/anti-affinity, the pods are not scheduled to that node.
Works while the problem is occurring:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
nodeName: master-zone-3-1-1-test-cluster-k8s-local
containers:
- image: nginx
name: nginx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
Does not work at the same time:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master-zone-3-1-1-test-cluster-k8s-local
containers:
- image: nginx
name: nginx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
Also, when checking the events of the latter, we got something very interesting:
Warning FailedScheduling 4m37s (x17 over 26m) default-scheduler 0/9 nodes are available: 8 node(s) didn't match node selector.
Warning FailedScheduling 97s (x6 over 3m39s) default-scheduler 0/8 nodes are available: 8 node(s) didn't match node selector.
Warning FailedScheduling 53s default-scheduler 0/8 nodes are available: 8 node(s) didn't match node selector.
Warning FailedScheduling 7s (x5 over 32s) default-scheduler 0/9 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 7 node(s) didn't match node selector.
Note that "nodeName" is not a selector. Using nodeName bypasses scheduling.
The 4th event happened when we recovered the node. The node that had the problem is a master, so the pod did not move there (although the previous 3 events show that the node was not being found). The interesting thing about the 4th event is that information about one node is still missing: the event says 0/9 nodes are available, but the descriptions only add up to 8.
But are you saying that the reason the pod should not have been scheduled on the missing node is that it was likely a master? And perhaps that is why
8 node(s) didn't match node selector
became 7? I assume there are no nodes being deleted at this point?
Note that "nodeName" is not a selector. Using nodeName bypasses scheduling.
The "nodeName" attempt was just to highlight that the node is usable and that pods land there when needed. In other words, it is not that the node is unable to start pods.
The 4th event happened when we recovered the node. The node that had the problem is a master, so the pod did not move there (although the previous 3 events show that the node was not being found). The interesting thing about the 4th event is that information about one node is still missing: the event says 0/9 nodes are available, but the descriptions only add up to 8.
But are you saying that the reason the pod should not have been scheduled on the missing node is that it was likely a master? And perhaps that is why
8 node(s) didn't match node selector
became 7? I assume there are no nodes being deleted at this point?
The test cluster has 9 nodes: 3 masters and 6 workers. Before the broken node started successfully, the events reported information about all of the available nodes:
0/8 nodes are available: 8 node(s) didn't match node selector.
However, once the node matching the node selector had started, the event reported
0/9 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 7 node(s) didn't match node selector.
so the descriptions account for 8 non-matching nodes, but say nothing about the 9th (which was accounted for in the previous events).
So, as for the state reflected in the events: in the end, the test pod did not start on the matching node because of the taint, but that is another story (and it should already have been that way in the first events).
The "nodeName" attempt was just to highlight that the node is usable and that pods land there when needed. In other words, it is not that the node is unable to start pods.
Note that nothing prevents overcommitting a node other than the scheduler, so that does not actually show much.
In the end, the test pod did not start on the matching node because of the taint, but that is another story (and it should already have been that way in the first events).
My question is whether the 9th node was tainted from the beginning. I am looking for either (1) reproducible steps to get into this state, or (2) where a bug could be.
My question is whether the 9th node was tainted from the beginning. I am looking for either (1) reproducible steps to get into this state, or (2) where a bug could be.
Yes, in this case the taint existed the whole time, because the node that went unresponsive was a master. However, we have the same issue on both masters and workers.
We still don't know where the problem comes from, but at least it seems the issue gets fixed by recreating the node and by restarting things. Those are, however, somewhat awkward ways to fix it.
Long shot, but if you run into this again... could you check whether there are nominated pods for the node?
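(Nominated pods can be spotted, for example, via the NOMINATED NODE column of kubectl get pods -o wide, as in the output earlier in this thread, or by printing status.nominatedNodeName directly:)
% kubectl get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,NOMINATED:.status.nominatedNodeName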
I'm thinking about possible scenarios and posting questions as they come up:
* Do you have other master nodes in your cluster?
All clusters have 3 masters (so restarting them is easy).
* Do you have extenders?
No.
One interesting thing we noticed today was a cluster in which one master was not receiving daemonset pods. ChaosMonkey is in use there and it terminated one of the worker nodes. Interestingly, after that, the pods moved to the master that had not been receiving them before. So it seems that removing some node other than the problematic one also fixed the issue at that point.
Because of that "fix", we now have to wait for the issue to happen again before we can answer the question about nominated pods.
Now I'm confused... does your daemonset tolerate the master taints? In other words... is the bug, for you, just the scheduling events, or the fact that the pods should have been scheduled at all?
The problem is that when there is at least one matching affinity (or anti-affinity) definition, the node is not seen by the scheduler.
So, can we say that the taint error should already have been present in the first events (since taints are not part of the affinity criteria)?
Got it. I was just checking your setup to make sure nothing was missing.
I don't think the node is not "seen" by the scheduler. Since
0/9 nodes are available
is displayed, we can conclude that the node is in fact in the scheduler's cache. The Unschedulable reasons are being lost somewhere, which is why they are not all included in the event.
Indeed, the total count always matches the actual number of nodes. It's just that the descriptive part of the event does not cover all the nodes, which, as said, could be a different issue.
Were you able to look at the kube-scheduler logs? Is there anything that looks relevant?
@zetaab I don't think there was. We can try again when the issue happens next time (and also check the nominated pods that were asked about earlier).
If possible, please run 1.18.5, in case we have inadvertently fixed the issue.
If you need more logs, we can reproduce this reliably on a test cluster.
@dilyevsky Please share the repro steps. Can you somehow figure out which filter is failing?
It seems to come down to the node's metadata.name for the ds pod... weird. Here is the pod yaml:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: "2020-07-09T23:17:53Z"
generateName: cilium-
labels:
controller-revision-hash: 6c94db8bb8
k8s-app: cilium
pod-template-generation: "1"
managedFields:
# managed fields crap
name: cilium-d5n4f
namespace: kube-system
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: cilium
uid: 0f00e8af-eb19-4985-a940-a02fa84fcbc5
resourceVersion: "2840"
selfLink: /api/v1/namespaces/kube-system/pods/cilium-d5n4f
uid: e3f7d566-ee5b-4557-8d1b-f0964cde2f22
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- us-central1-dilyevsky-master-qmwnl
containers:
- args:
- --config-dir=/tmp/cilium/config-map
command:
- cilium-agent
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_FLANNEL_MASTER_DEVICE
valueFrom:
configMapKeyRef:
key: flannel-master-device
name: cilium-config
optional: true
- name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
valueFrom:
configMapKeyRef:
key: flannel-uninstall-on-exit
name: cilium-config
optional: true
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
- name: CILIUM_CNI_CHAINING_MODE
valueFrom:
configMapKeyRef:
key: cni-chaining-mode
name: cilium-config
optional: true
- name: CILIUM_CUSTOM_CNI_CONF
valueFrom:
configMapKeyRef:
key: custom-cni-conf
name: cilium-config
optional: true
image: docker.io/cilium/cilium:v1.7.6
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /cni-install.sh
- --enable-debug=false
preStop:
exec:
command:
- /cni-uninstall.sh
livenessProbe:
exec:
command:
- cilium
- status
- --brief
failureThreshold: 10
initialDelaySeconds: 120
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
name: cilium-agent
readinessProbe:
exec:
command:
- cilium
- status
- --brief
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/cilium
name: cilium-run
- mountPath: /host/opt/cni/bin
name: cni-path
- mountPath: /host/etc/cni/net.d
name: etc-cni-netd
- mountPath: /var/lib/cilium/clustermesh
name: clustermesh-secrets
readOnly: true
- mountPath: /tmp/cilium/config-map
name: cilium-config-path
readOnly: true
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: cilium-token-j74lr
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostNetwork: true
initContainers:
- command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
key: clean-cilium-state
name: cilium-config
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
key: clean-cilium-bpf-state
name: cilium-config
optional: true
- name: CILIUM_WAIT_BPF_MOUNT
valueFrom:
configMapKeyRef:
key: wait-bpf-mount
name: cilium-config
optional: true
image: docker.io/cilium/cilium:v1.7.6
imagePullPolicy: IfNotPresent
name: clean-cilium-state
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/cilium
name: cilium-run
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: cilium-token-j74lr
readOnly: true
priority: 2000001000
priorityClassName: system-node-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: cilium
serviceAccountName: cilium
terminationGracePeriodSeconds: 1
tolerations:
- operator: Exists
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/network-unavailable
operator: Exists
volumes:
- hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
name: cilium-run
- hostPath:
path: /opt/cni/bin
type: DirectoryOrCreate
name: cni-path
- hostPath:
path: /etc/cni/net.d
type: DirectoryOrCreate
name: etc-cni-netd
- hostPath:
path: /lib/modules
type: ""
name: lib-modules
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
- name: clustermesh-secrets
secret:
defaultMode: 420
optional: true
secretName: cilium-clustermesh
- configMap:
defaultMode: 420
name: cilium-config
name: cilium-config-path
- name: cilium-token-j74lr
secret:
defaultMode: 420
secretName: cilium-token-j74lr
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-07-09T23:17:53Z"
message: '0/6 nodes are available: 5 node(s) didn''t match node selector.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: BestEffort
The way I reproduce this is by spinning up a new cluster with 3 masters and 3 worker nodes (using Cluster API) and applying Cilium 1.7.6.
Cilium yaml:
---
# Source: cilium/charts/agent/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium
namespace: kube-system
---
# Source: cilium/charts/operator/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-operator
namespace: kube-system
---
# Source: cilium/charts/config/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium-config
namespace: kube-system
data:
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# kubectl get ciliumid
# - "kvstore" stores identities in a kvstore, etcd or consul, that is
# configured below. Cilium versions before 1.6 supported only the kvstore
# backend. Upgrades from these older cilium versions should continue using
# the kvstore by commenting out the identity-allocation-mode below, or
# setting it to "kvstore".
identity-allocation-mode: crd
# If you want to run cilium in debug mode change this value to true
debug: "false"
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
# address.
enable-ipv4: "true"
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
# that will be seen in monitor output.
monitor-aggregation: medium
# The monitor aggregation interval governs the typical time between monitor
# notification events for each allowed connection.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-interval: 5s
# The monitor aggregation flags determine which TCP flags which, upon the
# first observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
# ct-global-max-entries-* specifies the maximum number of connections
# supported across all endpoints, split by protocol: tcp or other. One pair
# of maps uses these values for IPv4 connections, and another pair of maps
# use these values for IPv6 connections.
#
# If these values are modified, then during the next Cilium startup the
# tracking of ongoing connections may be disrupted. This may lead to brief
# policy drops or a change in loadbalancing decisions for a connection.
#
# For users upgrading from Cilium 1.2 or earlier, to minimize disruption
# during the upgrade process, comment out these options.
bpf-ct-global-tcp-max: "524288"
bpf-ct-global-any-max: "262144"
# bpf-policy-map-max specified the maximum number of entries in endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
# default value below will minimize memory usage in the default installation;
# users who are sensitive to latency may consider setting this to "true".
#
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
# this option and behave as though it is set to "true".
#
# If this value is modified, then during the next Cilium startup the restore
# of existing endpoints and tracking of ongoing connections may be disrupted.
# This may lead to policy drops or a change in loadbalancing decisions for a
# connection for some time. Endpoints may need to be recreated to restore
# connectivity.
#
# If this option is set to "false" during an upgrade from 1.3 or earlier to
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
preallocate-bpf-maps: "false"
# Regular expression matching compatible Istio sidecar istio-proxy
# container image names
sidecar-istio-proxy-image: "cilium/istio_proxy"
# Encapsulation mode for communication between nodes
# Possible values:
# - disabled
# - vxlan (default)
# - geneve
tunnel: vxlan
# Name of the cluster. Only relevant when building a mesh of clusters.
cluster-name: default
# DNS Polling periodically issues a DNS lookup for each `matchName` from
# cilium-agent. The result is used to regenerate endpoint policy.
# DNS lookups are repeated with an interval of 5 seconds, and are made for
# A(IPv4) and AAAA(IPv6) addresses. Should a lookup fail, the most recent IP
# data is used instead. An IP change will trigger a regeneration of the Cilium
# policy for each endpoint and increment the per cilium-agent policy
# repository revision.
#
# This option is disabled by default starting from version 1.4.x in favor
# of a more powerful DNS proxy-based implementation, see [0] for details.
# Enable this option if you want to use FQDN policies but do not want to use
# the DNS proxy.
#
# To ease upgrade, users may opt to set this option to "true".
# Otherwise please refer to the Upgrade Guide [1] which explains how to
# prepare policy rules for upgrade.
#
# [0] http://docs.cilium.io/en/stable/policy/language/#dns-based
# [1] http://docs.cilium.io/en/stable/install/upgrade/#changes-that-may-require-action
tofqdns-enable-poller: "false"
# wait-bpf-mount makes init container wait until bpf filesystem is mounted
wait-bpf-mount: "false"
masquerade: "true"
enable-xt-socket-fallback: "true"
install-iptables-rules: "true"
auto-direct-node-routes: "false"
kube-proxy-replacement: "probe"
enable-host-reachable-services: "false"
enable-external-ips: "false"
enable-node-port: "false"
node-port-bind-protection: "true"
enable-auto-protect-node-port-range: "true"
enable-endpoint-health-checking: "true"
enable-well-known-identities: "false"
enable-remote-node-identity: "true"
---
# Source: cilium/charts/agent/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
- services
- nodes
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
- update
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- watch
- update
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumendpoints
- ciliumendpoints/status
- ciliumnodes
- ciliumnodes/status
- ciliumidentities
- ciliumidentities/status
verbs:
- '*'
---
# Source: cilium/charts/operator/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-operator
rules:
- apiGroups:
- ""
resources:
# to automatically delete [core|kube]dns pods so that are starting to being
# managed by Cilium
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to automatically read from k8s and import the node's pod CIDR to cilium's
# etcd so all nodes know how to reach another pod running in in a different
# node.
- nodes
# to perform the translation of a CNP that contains `ToGroup` to its endpoints
- services
- endpoints
# to check apiserver connectivity
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumendpoints
- ciliumendpoints/status
- ciliumnodes
- ciliumnodes/status
- ciliumidentities
- ciliumidentities/status
verbs:
- '*'
---
# Source: cilium/charts/agent/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium
subjects:
- kind: ServiceAccount
name: cilium
namespace: kube-system
---
# Source: cilium/charts/operator/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-operator
subjects:
- kind: ServiceAccount
name: cilium-operator
namespace: kube-system
---
# Source: cilium/charts/agent/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
k8s-app: cilium
name: cilium
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: cilium
template:
metadata:
annotations:
# This annotation plus the CriticalAddonsOnly toleration makes
# cilium to be a critical pod in the cluster, which ensures cilium
# gets priority scheduling.
# https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
k8s-app: cilium
spec:
containers:
- args:
- --config-dir=/tmp/cilium/config-map
command:
- cilium-agent
livenessProbe:
exec:
command:
- cilium
- status
- --brief
failureThreshold: 10
# The initial delay for the liveness probe is intentionally large to
# avoid an endless kill & restart cycle if in the event that the initial
# bootstrapping takes longer than expected.
initialDelaySeconds: 120
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
exec:
command:
- cilium
- status
- --brief
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_FLANNEL_MASTER_DEVICE
valueFrom:
configMapKeyRef:
key: flannel-master-device
name: cilium-config
optional: true
- name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
valueFrom:
configMapKeyRef:
key: flannel-uninstall-on-exit
name: cilium-config
optional: true
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
- name: CILIUM_CNI_CHAINING_MODE
valueFrom:
configMapKeyRef:
key: cni-chaining-mode
name: cilium-config
optional: true
- name: CILIUM_CUSTOM_CNI_CONF
valueFrom:
configMapKeyRef:
key: custom-cni-conf
name: cilium-config
optional: true
image: "docker.io/cilium/cilium:v1.7.6"
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- "/cni-install.sh"
- "--enable-debug=false"
preStop:
exec:
command:
- /cni-uninstall.sh
name: cilium-agent
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
privileged: true
volumeMounts:
- mountPath: /var/run/cilium
name: cilium-run
- mountPath: /host/opt/cni/bin
name: cni-path
- mountPath: /host/etc/cni/net.d
name: etc-cni-netd
- mountPath: /var/lib/cilium/clustermesh
name: clustermesh-secrets
readOnly: true
- mountPath: /tmp/cilium/config-map
name: cilium-config-path
readOnly: true
# Needed to be able to load kernel modules
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
initContainers:
- command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
key: clean-cilium-state
name: cilium-config
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
key: clean-cilium-bpf-state
name: cilium-config
optional: true
- name: CILIUM_WAIT_BPF_MOUNT
valueFrom:
configMapKeyRef:
key: wait-bpf-mount
name: cilium-config
optional: true
image: "docker.io/cilium/cilium:v1.7.6"
imagePullPolicy: IfNotPresent
name: clean-cilium-state
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumeMounts:
- mountPath: /var/run/cilium
name: cilium-run
restartPolicy: Always
priorityClassName: system-node-critical
serviceAccount: cilium
serviceAccountName: cilium
terminationGracePeriodSeconds: 1
tolerations:
- operator: Exists
volumes:
# To keep state between restarts / upgrades
- hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
name: cilium-run
# To install cilium cni plugin in the host
- hostPath:
path: /opt/cni/bin
type: DirectoryOrCreate
name: cni-path
# To install cilium cni configuration in the host
- hostPath:
path: /etc/cni/net.d
type: DirectoryOrCreate
name: etc-cni-netd
# To be able to load kernel modules
- hostPath:
path: /lib/modules
name: lib-modules
# To access iptables concurrently with other processes (e.g. kube-proxy)
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
# To read the clustermesh configuration
- name: clustermesh-secrets
secret:
defaultMode: 420
optional: true
secretName: cilium-clustermesh
# To read the configuration from the config map
- configMap:
name: cilium-config
name: cilium-config-path
updateStrategy:
rollingUpdate:
maxUnavailable: 2
type: RollingUpdate
---
# Source: cilium/charts/operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.cilium/app: operator
name: cilium-operator
name: cilium-operator
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
io.cilium/app: operator
name: cilium-operator
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
labels:
io.cilium/app: operator
name: cilium-operator
spec:
containers:
- args:
- --debug=$(CILIUM_DEBUG)
- --identity-allocation-mode=$(CILIUM_IDENTITY_ALLOCATION_MODE)
- --synchronize-k8s-nodes=true
command:
- cilium-operator
env:
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_DEBUG
valueFrom:
configMapKeyRef:
key: debug
name: cilium-config
optional: true
- name: CILIUM_CLUSTER_NAME
valueFrom:
configMapKeyRef:
key: cluster-name
name: cilium-config
optional: true
- name: CILIUM_CLUSTER_ID
valueFrom:
configMapKeyRef:
key: cluster-id
name: cilium-config
optional: true
- name: CILIUM_IPAM
valueFrom:
configMapKeyRef:
key: ipam
name: cilium-config
optional: true
- name: CILIUM_DISABLE_ENDPOINT_CRD
valueFrom:
configMapKeyRef:
key: disable-endpoint-crd
name: cilium-config
optional: true
- name: CILIUM_KVSTORE
valueFrom:
configMapKeyRef:
key: kvstore
name: cilium-config
optional: true
- name: CILIUM_KVSTORE_OPT
valueFrom:
configMapKeyRef:
key: kvstore-opt
name: cilium-config
optional: true
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
key: AWS_ACCESS_KEY_ID
name: cilium-aws
optional: true
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
key: AWS_SECRET_ACCESS_KEY
name: cilium-aws
optional: true
- name: AWS_DEFAULT_REGION
valueFrom:
secretKeyRef:
key: AWS_DEFAULT_REGION
name: cilium-aws
optional: true
- name: CILIUM_IDENTITY_ALLOCATION_MODE
valueFrom:
configMapKeyRef:
key: identity-allocation-mode
name: cilium-config
optional: true
image: "docker.io/cilium/operator:v1.7.6"
imagePullPolicy: IfNotPresent
name: cilium-operator
livenessProbe:
httpGet:
host: '127.0.0.1'
path: /healthz
port: 9234
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 3
hostNetwork: true
restartPolicy: Always
serviceAccount: cilium-operator
serviceAccountName: cilium-operator
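A minimal way to drive that reproduction, assuming the manifest above is saved as cilium.yaml (the file name is just an example; the namespace and label match the manifest):
% kubectl apply -f cilium.yaml
% kubectl -n kube-system get ds cilium
% kubectl -n kube-system get pods -l k8s-app=cilium -o wide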
The scheduler logs are as follows:
I0709 23:08:22.055830 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0709 23:08:22.056081 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0709 23:08:23.137451 1 serving.go:313] Generated self-signed cert in-memory
W0709 23:08:33.843509 1 authentication.go:297] Error looking up in-cluster authentication configuration: etcdserver: request timed out
W0709 23:08:33.843671 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0709 23:08:33.843710 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0709 23:08:33.911805 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0709 23:08:33.911989 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0709 23:08:33.917999 1 authorization.go:47] Authorization is disabled
W0709 23:08:33.918162 1 authentication.go:40] Authentication is disabled
I0709 23:08:33.918238 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0709 23:08:33.925860 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:33.926013 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:33.930685 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0709 23:08:33.936198 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0709 23:08:34.026382 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:34.036998 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0709 23:08:50.597201 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0709 23:08:50.658551 1 factory.go:503] pod: kube-system/coredns-66bff467f8-9rjvd is already present in the active queue
E0709 23:12:27.673854 1 factory.go:503] pod kube-system/cilium-vv466 is already present in the backoff queue
E0709 23:12:58.099432 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: etcdserver: leader changed
Restarting the scheduler pod makes the pending pods get scheduled immediately.
Which pod events do you get? Do you know whether the node had taints?
Where is it failing to schedule? Does it fail only on the master nodes, or only for certain pods?
Ports? Does the node have enough space?
2020幎7æ9æ¥æšææ¥ãååŸ7æ49ådilyevskyã notifications @ github.com
æžããŸããïŒ
ããã¯ãdsãããã®ããŒãã®metadata.nameã«ãããªãããã§ã...
å€ã ã ããããããyamlã§ãïŒapiVersionïŒv1kindïŒPodmetadataïŒ
泚éïŒ
Scheduler.alpha.kubernetes.io/critical-podïŒ ""
CreationTimestampïŒ "2020-07-09T23ïŒ17ïŒ53Z"
generateNameïŒç¹æ¯-
ã©ãã«ïŒ
controller-revision-hashïŒ6c94db8bb8
k8s-appïŒç¹æ¯
pod-template-generationïŒ "1"
managedFieldsïŒ
ïŒç®¡çãã£ãŒã«ããããã
ååïŒç¹æ¯-d5n4f
åå空éïŒkube-system
ownerReferencesïŒ
- apiVersionïŒapps / v1
blockOwnerDeletionïŒtrue
ã³ã³ãããŒã©ãŒïŒtrue
çš®é¡ïŒDaemonSet
ååïŒç¹æ¯
uidïŒ0f00e8af-eb19-4985-a940-a02fa84fcbc5
resourceVersionïŒ "2840"
selfLinkïŒ/ api / v1 / namespaces / kube-system / pods / cilium-d5n4f
uidïŒe3f7d566-ee5b-4557-8d1b-f0964cde2f22specïŒ
芪åæ§ïŒ
nodeAffinityïŒ
requiredDuringSchedulingIgnoredDuringExecutionïŒ
nodeSelectorTermsïŒ
--matchFieldsïŒ
-ããŒïŒmetadata.name
æŒç®åïŒã§
å€ïŒ
--us-central1-dilyevsky-master-qmwnl
ã³ã³ããïŒ- åŒæ°ïŒ
- --config-dir = / tmp / cilium / config-map
ã³ãã³ãïŒ- ç¹æ¯å€
envïŒ- ååïŒK8S_NODE_NAME
valueFromïŒ
fieldRefïŒ
apiVersionïŒv1
fieldPathïŒspec.nodeName- ååïŒCILIUM_K8S_NAMESPACE
valueFromïŒ
fieldRefïŒ
apiVersionïŒv1
fieldPathïŒmetadata.namespace- ååïŒCILIUM_FLANNEL_MASTER_DEVICE
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒãã©ã³ãã«-ãã¹ã¿ãŒ-ããã€ã¹
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue- ååïŒCILIUM_FLANNEL_UNINSTALL_ON_EXIT
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒflannel-uninstall-on-exit
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue- ååïŒCILIUM_CLUSTERMESH_CONFIG
å€ïŒ/ var / lib / cilium / clustermesh /- ååïŒCILIUM_CNI_CHAINING_MODE
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒcni-chaining-mode
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue- ååïŒCILIUM_CUSTOM_CNI_CONF
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒcustom-cni-conf
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue
ç»åïŒdocker.io/cilium/ cilium ïŒv1.7.6
imagePullPolicyïŒIfNotPresent
ã©ã€ããµã€ã¯ã«ïŒ
postStartïŒ
execïŒ
ã³ãã³ãïŒ
- /cni-install.sh
- --enable-debug = false
preStopïŒ
execïŒ
ã³ãã³ãïŒ- /cni-uninstall.sh
livenessProbeïŒ
execïŒ
ã³ãã³ãïŒ
- ç¹æ¯
- ç¶æ
- -ç°¡åãªèª¬æ
failureThresholdïŒ10
initialDelaySecondsïŒ120
periodSecondsïŒ30
successThresholdïŒ1
timeoutSecondsïŒ5
ååïŒç¹æ¯å€
readinessProbeïŒ
execïŒ
ã³ãã³ãïŒ- ç¹æ¯
- ç¶æ
- -ç°¡åãªèª¬æ
failureThresholdïŒ3
initialDelaySecondsïŒ5
periodSecondsïŒ30
successThresholdïŒ1
timeoutSecondsïŒ5
ãªãœãŒã¹ïŒ{}
securityContextïŒ
æ©èœïŒ
è¿œå ïŒ- NET_ADMIN
- SYS_MODULE
ç¹æš©ïŒtrue
ã¿ãŒãããŒã·ã§ã³ã¡ãã»ãŒãžãã¹ïŒ/ dev / termination-log
terminalMessagePolicyïŒãã¡ã€ã«
volumeMountsïŒ- mountPathïŒ/ var / run / cilium
ååïŒç¹æ¯-run- mountPathïŒ/ host / opt / cni / bin
ååïŒcni-path- mountPathïŒ/host/etc/cni/net.d
ååïŒetc-cni-netd- mountPathïŒ/ var / lib / cilium / clustermesh
ååïŒclustermesh-secrets
readOnlyïŒtrue- mountPathïŒ/ tmp / cilium / config-map
ååïŒcilium-config-path
readOnlyïŒtrue- mountPathïŒ/ lib / modules
ååïŒlib-modules
readOnlyïŒtrue- mountPathïŒ/run/xtables.lock
ååïŒxtables-lock- mountPathïŒ/var/run/secrets/kubernetes.io/serviceaccount
ååïŒç¹æ¯ããŒã¯ã³-j74lr
readOnlyïŒtrue
dnsPolicyïŒClusterFirst
enableServiceLinksïŒtrue
hostNetworkïŒtrue
initContainersïŒ- ã³ãã³ãïŒ
- /init-container.sh
envïŒ- ååïŒCILIUM_ALL_STATE
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒclean-cilium-state
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue- ååïŒCILIUM_BPF_STATE
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒclean-cilium-bpf-state
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue- ååïŒCILIUM_WAIT_BPF_MOUNT
valueFromïŒ
configMapKeyRefïŒ
ããŒïŒwait-bpf-mount
ååïŒcilium-config
ãªãã·ã§ã³ïŒtrue
ç»åïŒdocker.io/cilium/ cilium ïŒv1.7.6
imagePullPolicyïŒIfNotPresent
ååïŒclean-cilium-state
ãªãœãŒã¹ïŒ{}
securityContextïŒ
æ©èœïŒ
è¿œå ïŒ
- NET_ADMIN
ç¹æš©ïŒtrue
ã¿ãŒãããŒã·ã§ã³ã¡ãã»ãŒãžãã¹ïŒ/ dev / termination-log
terminalMessagePolicyïŒãã¡ã€ã«
volumeMountsïŒ- mountPathïŒ/ var / run / cilium
ååïŒç¹æ¯-run- mountPathïŒ/var/run/secrets/kubernetes.io/serviceaccount
ååïŒç¹æ¯ããŒã¯ã³-j74lr
readOnlyïŒtrue
åªå 床ïŒ2000001000
priorityClassNameïŒã·ã¹ãã ããŒãã¯ãªãã£ã«ã«
restartPolicyïŒåžžã«
ã¹ã±ãžã¥ãŒã©åïŒããã©ã«ã-ã¹ã±ãžã¥ãŒã©
securityContextïŒ{}
serviceAccountïŒç¹æ¯
serviceAccountNameïŒç¹æ¯
çµäºGracePeriodSecondsïŒ1
蚱容ç¯å²ïŒ- ãªãã¬ãŒã¿ãŒïŒååšããŸã
- å¹æïŒNoExecute
ããŒïŒnode.kubernetes.io/not-ready
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoExecute
ããŒïŒnode.kubernetes.io/unreachable
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoSchedule
ããŒïŒnode.kubernetes.io/disk-pressure
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoSchedule
ããŒïŒnode.kubernetes.io/memory-pressure
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoSchedule
ããŒïŒnode.kubernetes.io/pid-pressure
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoSchedule
ããŒïŒnode.kubernetes.io/unschedulable
ãªãã¬ãŒã¿ãŒïŒååšããŸã- å¹æïŒNoSchedule
ããŒïŒnode.kubernetes.io/network-å©çšäžå¯
ãªãã¬ãŒã¿ãŒïŒååšããŸã
ããªã¥ãŒã ïŒ- hostPathïŒ
ãã¹ïŒ/ var / run / cilium
ã¿ã€ãïŒDirectoryOrCreate
ååïŒç¹æ¯-run- hostPathïŒ
ãã¹ïŒ/ opt / cni / bin
ã¿ã€ãïŒDirectoryOrCreate
ååïŒcni-path- hostPathïŒ
ãã¹ïŒ/etc/cni/net.d
ã¿ã€ãïŒDirectoryOrCreate
ååïŒetc-cni-netd- hostPathïŒ
ãã¹ïŒ/ lib / modules
ã¿ã€ãïŒ ""
ååïŒlib-modules- hostPathïŒ
ãã¹ïŒ/run/xtables.lock
ã¿ã€ãïŒFileOrCreate
ååïŒxtables-lock- ååïŒclustermesh-secrets
ç§å¯ïŒ
defaultModeïŒ420
ãªãã·ã§ã³ïŒtrue
secretNameïŒcilium-clustermesh- configMapïŒ
defaultModeïŒ420
ååïŒcilium-config
ååïŒcilium-config-path- ååïŒç¹æ¯ããŒã¯ã³-j74lr
ç§å¯ïŒ
defaultModeïŒ420
secretNameïŒcilium-token-j74lrstatusïŒ
æ¡ä»¶ïŒ- lastProbeTimeïŒnull
lastTransitionTimeïŒ "2020-07-09T23ïŒ17ïŒ53Z"
ã¡ãã»ãŒãžïŒã0/6ããŒãã䜿çšå¯èœã§ãïŒ5ããŒããããŒãã»ã¬ã¯ã¿ãŒãšäžèŽããŸããã§ãããã
çç±ïŒäºå®å€
ã¹ããŒã¿ã¹ïŒãFalseã
ã¿ã€ãïŒPodScheduled
ãã§ãŒãºïŒä¿çäž
qosClassïŒBestEffortç§ããããåçŸããæ¹æ³ã¯ã2ã€ã®ãã¹ã¿ãŒãš
3ã€ã®ã¯ãŒã«ãŒããŒãïŒã¯ã©ã¹ã¿ãŒAPIã䜿çšïŒãšCilium 1.7.6ã®é©çšïŒ---ïŒãœãŒã¹ïŒcilium / charts / agent / templates / serviceaccount.yamlapiVersionïŒv1kindïŒServiceAccountmetadataïŒ
ååïŒç¹æ¯
åå空éïŒkube-system
---ïŒãœãŒã¹ïŒcilium / charts / operator / templates / serviceaccount.yamlapiVersionïŒv1kindïŒServiceAccountmetadataïŒ
ååïŒç¹æ¯ãªãã¬ãŒã¿ãŒ
åå空éïŒkube-system
---ïŒãœãŒã¹ïŒcilium / charts / config / templates / configmap.yamlapiVersionïŒv1kindïŒConfigMapmetadataïŒ
ååïŒcilium-config
åå空éïŒkube-systemdataïŒïŒã¢ã€ãã³ãã£ãã£å²ãåœãŠã¢ãŒãã¯ãç¹æ¯éã§ã¢ã€ãã³ãã£ãã£ãå ±æããæ¹æ³ãéžæããŸã
ïŒããŒãã®ä¿åæ¹æ³ãèšå®ããŸãã ãªãã·ã§ã³ã¯ãcrdããŸãã¯ãkvstoreãã§ãã
ïŒ-ãcrdãã¯ãã¢ã€ãã³ãã£ãã£ãCRDïŒã«ã¹ã¿ã ãªãœãŒã¹å®çŸ©ïŒãšããŠkubernetesã«ä¿åããŸãã
ïŒãããã¯æ¬¡ã®ã³ãã³ãã§ç §äŒã§ããŸãã
ïŒkubectl get ciliumid
ïŒ-ãkvstoreãã¯ãkvstoreãªã©ã®etcdãŸãã¯consulã«IDãæ ŒçŽããŸãã
ïŒä»¥äžã§æ§æã 1.6ããåã®CiliumããŒãžã§ã³ã¯kvstoreã®ã¿ããµããŒãããŠããŸãã
ïŒããã¯ãšã³ãã ãããã®å€ãç¹æ¯ããŒãžã§ã³ããã®ã¢ããã°ã¬ãŒãã¯ãåŒãç¶ã䜿çšããå¿ èŠããããŸã
ïŒä»¥äžã®identity-allocation-modeãã³ã¡ã³ãã¢ãŠãããŠãkvstoreããŸãã¯
ïŒãkvstoreãã«èšå®ããŸãã
ã¢ã€ãã³ãã£ãã£å²ãåœãŠã¢ãŒãïŒcrdïŒciliumããããã°ã¢ãŒãã§å®è¡ããå Žåã¯ããã®å€ãtrueã«å€æŽããŸã
ãããã°ïŒãfalseãïŒIPv4ã¢ãã¬ãã·ã³ã°ãæå¹ã«ããŸãã æå¹ã«ãããšããã¹ãŠã®ãšã³ããã€ã³ãã«IPv4ãå²ãåœãŠãããŸã
ïŒ äœæã
enable-ipv4ïŒ "true"ïŒIPv6ã¢ãã¬ãã·ã³ã°ãæå¹ã«ããŸãã æå¹ã«ãããšããã¹ãŠã®ãšã³ããã€ã³ãã«IPv6ãå²ãåœãŠãããŸã
ïŒ äœæã
enable-ipv6ïŒ "false"ïŒç¹æ¯ã¢ãã¿ãŒã§ãã±ããã®ãã¬ãŒã¹ãéçŽããå Žåã¯ããã®ã¬ãã«ãèšå®ããŸã
ïŒãããäœãããäžãããŸãã¯ãæ倧ãã ã¬ãã«ãé«ãã»ã©ããã±ããã¯å°ãªããªããŸãã
ïŒã¢ãã¿ãŒåºåã«è¡šç€ºãããŸãã
ã¢ãã¿ãŒéçŽïŒäžïŒã¢ãã¿ãŒã®éçŽééã¯ãã¢ãã¿ãŒéã®äžè¬çãªæéã管çããŸã
ïŒèš±å¯ãããæ¥ç¶ããšã®éç¥ã€ãã³ãã
ïŒ
ïŒã¢ãã¿ãŒã®éèšããäžã以äžã«èšå®ãããŠããå Žåã«ã®ã¿æå¹ã§ãã
ã¢ãã¿ãŒ-éçŽ-ééïŒ5ç§ïŒã¢ãã¿ãŒéçŽãã©ã°ã¯ãã©ã®TCPãã©ã°ã決å®ããŸãã
ïŒæåã®èŠ³å¯ãã¢ãã¿ãŒéç¥ãçæãããŸãã
ïŒ
ïŒã¢ãã¿ãŒã®éèšããäžã以äžã«èšå®ãããŠããå Žåã«ã®ã¿æå¹ã§ãã
monitor-aggregation-flagsïŒãã¹ãŠïŒct-global-max-entries- *æ¥ç¶ã®æ倧æ°ãæå®ããŸã
ïŒãã¹ãŠã®ãšã³ããã€ã³ãã§ãµããŒãããããããã³ã«ïŒtcpãŸãã¯ãã®ä»ïŒã§åå²ãããŸãã ã¯ã³ãã¢
ãããã®æ°ã¯ãIPv4æ¥ç¶ãããã³å¥ã®ãããã®ãã¢ã«ãããã®å€ã䜿çšããŸã
ïŒãããã®å€ãIPv6æ¥ç¶ã«äœ¿çšããŸãã
ïŒ
ïŒãããã®å€ãå€æŽãããå Žåã次ã®Ciliumã®èµ·åæã«
ïŒé²è¡äžã®æ¥ç¶ã®è¿œè·¡ãäžæãããå¯èœæ§ããããŸãã ããã¯ç°¡åã«ã€ãªããå¯èœæ§ããããŸã
ïŒããªã·ãŒã®åé€ãŸãã¯æ¥ç¶ã®è² è·åæ£ã®æ±ºå®ã®å€æŽã
ïŒ
ïŒCilium 1.2以åããã¢ããã°ã¬ãŒããããŠãŒã¶ãŒã®å Žåãäžæãæå°éã«æãããã
ïŒã¢ããã°ã¬ãŒãããã»ã¹äžã«ããããã®ãªãã·ã§ã³ãã³ã¡ã³ãã¢ãŠãããŸãã
bpf-ct-global-tcp-maxïŒ "524288"
bpf-ct-global-any-maxïŒ "262144"ïŒbpf-policy-map-maxã¯ããšã³ããã€ã³ãã®ãšã³ããªã®æ倧æ°ãæå®ããŸãã
ïŒããªã·ãŒãããïŒãšã³ããã€ã³ãããšïŒ
bpf-policy-map-maxïŒ "16384"ïŒããããšã³ããªã®äºåå²ãåœãŠã«ããããã±ããããšã®é 延ãæžããããšãã§ããŸãã
ïŒãããå ã®ãšã³ããªã®äºåã¡ã¢ãªå²ãåœãŠã®è²»çšã ã¶ã»
ïŒä»¥äžã®ããã©ã«ãå€ã¯ãããã©ã«ãã€ã³ã¹ããŒã«ã§ã®ã¡ã¢ãªäœ¿çšéãæå°éã«æããŸãã
ïŒã¬ã€ãã³ã·ãŒã«ææãªãŠãŒã¶ãŒã¯ãããããtrueãã«èšå®ããããšãæ€èšã§ããŸãã
ïŒ
ïŒãã®ãªãã·ã§ã³ã¯Cilium1.4ã§å°å ¥ãããŸããã Cilium1.3以åã¯ç¡èŠããŸã
ïŒãã®ãªãã·ã§ã³ã¯ããtrueãã«èšå®ãããŠãããã®ããã«åäœããŸãã
ïŒ
ïŒãã®å€ãå€æŽãããå Žåã次ã®Ciliumã®èµ·åæã«åŸ©å
æ¢åã®ãšã³ããã€ã³ãã®æ°ãšé²è¡äžã®æ¥ç¶ã®è¿œè·¡ãäžæãããå¯èœæ§ããããŸãã
ïŒããã«ãããããªã·ãŒãåé€ãããããè² è·åæ£ã®æ±ºå®ãå€æŽããããããå¯èœæ§ããããŸãã
ïŒãã°ããã®éæ¥ç¶ã 埩å ããã«ã¯ããšã³ããã€ã³ãã®åäœæãå¿ èŠã«ãªãå ŽåããããŸã
ïŒæ¥ç¶ã
ïŒ
ïŒ1.3以åãããžã®ã¢ããã°ã¬ãŒãäžã«ãã®ãªãã·ã§ã³ããfalseãã«èšå®ãããŠããå Žå
ïŒ1.4以éã®å Žåãã¢ããã°ã¬ãŒãäžã«1åéãã®äžæãçºçããå¯èœæ§ããããŸãã
preallocate-bpf-mapsïŒ "false"ïŒäºææ§ã®ããIstioãµã€ãã«ãŒistio-proxyã«äžèŽããæ£èŠè¡šçŸ
ïŒã³ã³ããã€ã¡ãŒãžå
sidecar-istio-proxy-imageïŒ "cilium / istio_proxy"ïŒããŒãéã®éä¿¡ã®ããã®ã«ãã»ã«åã¢ãŒã
ïŒå¯èœãªå€ïŒ
ïŒ - ç¡å¹
ïŒ-vxlanïŒããã©ã«ãïŒ
ïŒ-ãžã¥ããŒã
ãã³ãã«ïŒvxlanïŒã¯ã©ã¹ã¿ãŒã®ååã ã¯ã©ã¹ã¿ãŒã®ã¡ãã·ã¥ãæ§ç¯ããå Žåã«ã®ã¿é¢ä¿ããŸãã
ã¯ã©ã¹ã¿ãŒåïŒããã©ã«ãïŒDNSããŒãªã³ã°ã¯ãããã®
matchName
ããšã«DNSã«ãã¯ã¢ãããå®æçã«çºè¡ããŸã
ïŒç¹æ¯å€ã çµæã¯ããšã³ããã€ã³ãããªã·ãŒãåçæããããã«äœ¿çšãããŸãã
ïŒDNSã«ãã¯ã¢ããã¯5ç§ééã§ç¹°ãè¿ããã
ïŒAïŒIPv4ïŒããã³AAAAïŒIPv6ïŒã¢ãã¬ã¹ã ã«ãã¯ã¢ããã倱æããå Žåãææ°ã®IP
ïŒä»£ããã«ããŒã¿ã䜿çšãããŸãã IPã®å€æŽã«ãããç¹æ¯ã®åçãããªã¬ãŒãããŸã
ïŒåãšã³ããã€ã³ãã®ããªã·ãŒãšcilium-agentããšã®ããªã·ãŒãã€ã³ã¯ãªã¡ã³ãããŸã
ïŒãªããžããªã®ãªããžã§ã³ã
ïŒ
ïŒãã®ãªãã·ã§ã³ã¯ãããŒãžã§ã³1.4.x以éãããã©ã«ãã§ç¡å¹ã«ãªã£ãŠããŸãã
ãã匷åãªDNSãããã·ããŒã¹ã®å®è£ ã®ïŒã詳现ã«ã€ããŠã¯ã[0]ãåç §ããŠãã ããã
ïŒFQDNããªã·ãŒã䜿çšããããã䜿çšããããªãå Žåã¯ããã®ãªãã·ã§ã³ãæå¹ã«ããŸã
ïŒDNSãããã·ã
ïŒ
ïŒã¢ããã°ã¬ãŒãã容æã«ããããã«ããŠãŒã¶ãŒã¯ãã®ãªãã·ã§ã³ããtrueãã«èšå®ããããšãéžæã§ããŸãã
ïŒãã以å€ã®å Žåã¯ãã¢ããã°ã¬ãŒãã¬ã€ã[1]ãåç §ããŠãã ããã
ïŒã¢ããã°ã¬ãŒãçšã®ããªã·ãŒã«ãŒã«ãæºåããŸãã
ïŒ
ïŒ[0] http://docs.cilium.io/en/stable/policy/language/#dnsããŒã¹
ïŒ[1] http://docs.cilium.io/en/stable/install/upgrade/#changes -that-may-require-action
tofqdns-enable-pollerïŒ "false"ïŒwait-bpf-mountã¯ãbpfãã¡ã€ã«ã·ã¹ãã ãããŠã³ãããããŸã§initã³ã³ãããåŸ æ©ãããŸã
wait-bpf-mountïŒ "false"ãã¹ã«ã¬ãŒãïŒãæ¬åœã
enable-xt-socket-fallbackïŒ "true"
install-iptables-rulesïŒ "true"
auto-direct-node-routesïŒ "false"
kube-proxy-replacementïŒ "ãããŒã"
enable-host-reachable-servicesïŒ "false"
enable-external-ipsïŒ "false"
enable-node-portïŒ "false"
node-port-bind-protectionïŒ "true"
enable-auto-protect-node-port-rangeïŒ "true"
enable-endpoint-health-checkingïŒ "true"
enable-well-known-identitiesïŒ "false"
enable-remote-node-identityïŒ "true"
---ïŒãœãŒã¹ïŒcilium / charts / agent / templates / clusterrole.yamlapiVersionïŒrbac.authorization.k8s.io/v1kindïŒClusterRolemetadataïŒ
ååïŒciliumrulesïŒ
- apiGroupsïŒ
- network.k8s.io
ãªãœãŒã¹ïŒ- ãããã¯ãŒã¯ããªã·ãŒ
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- apiGroupsïŒ
- Discovery.k8s.io
ãªãœãŒã¹ïŒ- ãšã³ããã€ã³ãã¹ã©ã€ã¹
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- apiGroupsïŒ
- ãã
ãªãœãŒã¹ïŒ- åå空é
- ãµãŒãã¹
- ããŒã
- ãšã³ããã€ã³ã
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- apiGroupsïŒ
- ãã
ãªãœãŒã¹ïŒ- ããã
- ããŒã
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- æŽæ°
- apiGroupsïŒ
- ãã
ãªãœãŒã¹ïŒ- ããŒã
- ããŒã/ã¹ããŒã¿ã¹
åè©ïŒ- ããã
- apiGroupsïŒ
- apiextensions.k8s.io
ãªãœãŒã¹ïŒ- customresourcedefinitions
åè©ïŒ- äœæãã
- ååŸãã
- ãªã¹ã
- èŠã
- æŽæ°
- apiGroupsïŒ
- cilium.io
ãªãœãŒã¹ïŒ- ciliumnetworkpolicies
- ciliumnetworkpolicies / status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies / status
- ciliumendpoints
- ciliumendpoints / status
- ciliumnodes
- ciliumnodes / status
- ciliumidentities
- ciliumidentities / status
åè©ïŒ- '*'
---ïŒãœãŒã¹ïŒcilium / charts / operator / templates / clusterrole.yamlapiVersionïŒrbac.authorization.k8s.io/v1kindïŒClusterRolemetadataïŒ
ååïŒç¹æ¯-ãªãã¬ãŒã¿ãŒã«ãŒã«ïŒ- apiGroupsïŒ
- ãã
ãªãœãŒã¹ïŒ
ïŒ[core | kube] dnsããããèªåçã«åé€ããŠã
ïŒCiliumã管ç- ããã
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- åé€
- apiGroupsïŒ
- Discovery.k8s.io
ãªãœãŒã¹ïŒ- ãšã³ããã€ã³ãã¹ã©ã€ã¹
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- apiGroupsïŒ
- ãã
ãªãœãŒã¹ïŒ
ïŒk8sããèªåçã«èªã¿åããããŒãã®ãããCIDRãç¹æ¯ã«ã€ã³ããŒãããŸã
ïŒetcdã§ããã¹ãŠã®ããŒããå¥ã®ãããã§å®è¡ãããŠããå¥ã®ãããã«å°éããæ¹æ³ãèªèããŸã
ïŒããŒãã- ããŒã
ïŒToGroup
ãå«ãCNPã®ãšã³ããã€ã³ããžã®å€æãå®è¡ããŸã- ãµãŒãã¹
- ãšã³ããã€ã³ã
ïŒapiserverã®æ¥ç¶ã確èªãã- åå空é
åè©ïŒ- ååŸãã
- ãªã¹ã
- èŠã
- apiGroupsïŒ
- cilium.io
ãªãœãŒã¹ïŒ- ciliumnetworkpolicies
- ciliumnetworkpolicies / status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies / status
- ciliumendpoints
- ciliumendpoints / status
- ciliumnodes
- ciliumnodes / status
- ciliumidentities
- ciliumidentities / status
åè©ïŒ- '*'
---
# Source: cilium/charts/agent/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium
subjects:
- kind: ServiceAccount
  name: cilium
  namespace: kube-system
---
# Source: cilium/charts/operator/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium-operator
subjects:
- kind: ServiceAccount
  name: cilium-operator
  namespace: kube-system
---
# Source: cilium/charts/agent/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cilium
  name: cilium
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cilium
  template:
    metadata:
      annotations:
        # This annotation plus the CriticalAddonsOnly toleration makes
        # cilium a critical pod in the cluster, which ensures cilium
        # gets priority scheduling.
        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: cilium
    spec:
      containers:
      - args:
        - --config-dir=/tmp/cilium/config-map
        command:
        - cilium-agent
        livenessProbe:
          exec:
            command:
            - cilium
            - status
            - --brief
          failureThreshold: 10
          # The initial delay for the liveness probe is intentionally large to
          # avoid an endless kill & restart cycle in the event that the initial
          # bootstrapping takes longer than expected.
          initialDelaySeconds: 120
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - cilium
            - status
            - --brief
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: CILIUM_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: CILIUM_FLANNEL_MASTER_DEVICE
          valueFrom:
            configMapKeyRef:
              key: flannel-master-device
              name: cilium-config
              optional: true
        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
          valueFrom:
            configMapKeyRef:
              key: flannel-uninstall-on-exit
              name: cilium-config
              optional: true
        - name: CILIUM_CLUSTERMESH_CONFIG
          value: /var/lib/cilium/clustermesh/
        - name: CILIUM_CNI_CHAINING_MODE
          valueFrom:
            configMapKeyRef:
              key: cni-chaining-mode
              name: cilium-config
              optional: true
        - name: CILIUM_CUSTOM_CNI_CONF
          valueFrom:
            configMapKeyRef:
              key: custom-cni-conf
              name: cilium-config
              optional: true
        image: "docker.io/cilium/cilium:v1.7.6"
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command:
              - "/cni-install.sh"
              - "--enable-debug=false"
          preStop:
            exec:
              command:
              - /cni-uninstall.sh
        name: cilium-agent
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - SYS_MODULE
          privileged: true
        volumeMounts:
        - mountPath: /var/run/cilium
          name: cilium-run
        - mountPath: /host/opt/cni/bin
          name: cni-path
        - mountPath: /host/etc/cni/net.d
          name: etc-cni-netd
        - mountPath: /var/lib/cilium/clustermesh
          name: clustermesh-secrets
          readOnly: true
        - mountPath: /tmp/cilium/config-map
          name: cilium-config-path
          readOnly: true
          # Needed to be able to load kernel modules
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - command:
        - /init-container.sh
        env:
        - name: CILIUM_ALL_STATE
          valueFrom:
            configMapKeyRef:
              key: clean-cilium-state
              name: cilium-config
              optional: true
        - name: CILIUM_BPF_STATE
          valueFrom:
            configMapKeyRef:
              key: clean-cilium-bpf-state
              name: cilium-config
              optional: true
        - name: CILIUM_WAIT_BPF_MOUNT
          valueFrom:
            configMapKeyRef:
              key: wait-bpf-mount
              name: cilium-config
              optional: true
        image: "docker.io/cilium/cilium:v1.7.6"
        imagePullPolicy: IfNotPresent
        name: clean-cilium-state
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: true
        volumeMounts:
        - mountPath: /var/run/cilium
          name: cilium-run
      restartPolicy: Always
      priorityClassName: system-node-critical
      serviceAccount: cilium
      serviceAccountName: cilium
      terminationGracePeriodSeconds: 1
      tolerations:
      - operator: Exists
      volumes:
        # To keep state between restarts / upgrades
      - hostPath:
          path: /var/run/cilium
          type: DirectoryOrCreate
        name: cilium-run
        # To install cilium cni plugin in the host
      - hostPath:
          path: /opt/cni/bin
          type: DirectoryOrCreate
        name: cni-path
        # To install cilium cni configuration in the host
      - hostPath:
          path: /etc/cni/net.d
          type: DirectoryOrCreate
        name: etc-cni-netd
        # To be able to load kernel modules
      - hostPath:
          path: /lib/modules
        name: lib-modules
        # To access iptables concurrently with other processes (e.g. kube-proxy)
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
        # To read the clustermesh configuration
      - name: clustermesh-secrets
        secret:
          defaultMode: 420
          optional: true
          secretName: cilium-clustermesh
        # To read the configuration from the config map
      - configMap:
          name: cilium-config
        name: cilium-config-path
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 2
    type: RollingUpdate
---
# Source: cilium/charts/operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.cilium/app: operator
    name: cilium-operator
  name: cilium-operator
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      io.cilium/app: operator
      name: cilium-operator
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        io.cilium/app: operator
        name: cilium-operator
    spec:
      containers:
      - args:
        - --debug=$(CILIUM_DEBUG)
        - --identity-allocation-mode=$(CILIUM_IDENTITY_ALLOCATION_MODE)
        - --synchronize-k8s-nodes=true
        command:
        - cilium-operator
        env:
        - name: CILIUM_K8S_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: CILIUM_DEBUG
          valueFrom:
            configMapKeyRef:
              key: debug
              name: cilium-config
              optional: true
        - name: CILIUM_CLUSTER_NAME
          valueFrom:
            configMapKeyRef:
              key: cluster-name
              name: cilium-config
              optional: true
        - name: CILIUM_CLUSTER_ID
          valueFrom:
            configMapKeyRef:
              key: cluster-id
              name: cilium-config
              optional: true
        - name: CILIUM_IPAM
          valueFrom:
            configMapKeyRef:
              key: ipam
              name: cilium-config
              optional: true
        - name: CILIUM_DISABLE_ENDPOINT_CRD
          valueFrom:
            configMapKeyRef:
              key: disable-endpoint-crd
              name: cilium-config
              optional: true
        - name: CILIUM_KVSTORE
          valueFrom:
            configMapKeyRef:
              key: kvstore
              name: cilium-config
              optional: true
        - name: CILIUM_KVSTORE_OPT
          valueFrom:
            configMapKeyRef:
              key: kvstore-opt
              name: cilium-config
              optional: true
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: AWS_ACCESS_KEY_ID
              name: cilium-aws
              optional: true
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: AWS_SECRET_ACCESS_KEY
              name: cilium-aws
              optional: true
        - name: AWS_DEFAULT_REGION
          valueFrom:
            secretKeyRef:
              key: AWS_DEFAULT_REGION
              name: cilium-aws
              optional: true
        - name: CILIUM_IDENTITY_ALLOCATION_MODE
          valueFrom:
            configMapKeyRef:
              key: identity-allocation-mode
              name: cilium-config
              optional: true
        image: "docker.io/cilium/operator:v1.7.6"
        imagePullPolicy: IfNotPresent
        name: cilium-operator
        livenessProbe:
          httpGet:
            host: '127.0.0.1'
            path: /healthz
            port: 9234
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 3
      hostNetwork: true
      restartPolicy: Always
      serviceAccount: cilium-operator
      serviceAccountName: cilium-operator
Could you try bumping the log level and filtering for that node with grep, or post it here?
On Thu, Jul 9, 2020 at 7:55 PM dilyevsky notifications@github.com wrote:
The scheduler logs are:
I0709 23:08:22.056081 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0709 23:08:23.137451 1 serving.go:313] Generated self-signed cert in-memory
W0709 23:08:33.843509 1 authentication.go:297] Error looking up in-cluster authentication configuration: etcdserver: request timed out
W0709 23:08:33.843671 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0709 23:08:33.843710 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0709 23:08:33.911805 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0709 23:08:33.911989 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0709 23:08:33.917999 1 authorization.go:47] Authorization is disabled
W0709 23:08:33.918162 1 authentication.go:40] Authentication is disabled
I0709 23:08:33.918238 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0709 23:08:33.925860 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:33.926013 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:33.930685 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0709 23:08:33.936198 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0709 23:08:34.026382 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0709 23:08:34.036998 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0709 23:08:50.597201 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0709 23:08:50.658551 1 factory.go:503] pod kube-system/coredns-66bff467f8-9rjvd is already present in the active queue
E0709 23:12:27.673854 1 factory.go:503] pod kube-system/cilium-vv466 is already present in the backoff queue
E0709 23:12:58.099432 1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: etcdserver: leader changed
When the scheduler pod is restarted, the pending pods get scheduled immediately.
Here are the events:
```
Events:
  Type     Reason            Age   From   Message
  ----     ------            ----  ----   -------
  Warning  FailedScheduling
  Warning  FailedScheduling
```
The node only has two taints but the pod tolerates all existing taints and yeah it seems to only happen on masters:
Taints: node-role.kubernetes.io/master:NoSchedule
        node.kubernetes.io/network-unavailable:NoSchedule
There is enough space and pod is best effort with no reservation anyway:
```
  Resource                   Requests     Limits
  --------                   --------     ------
  cpu                        650m (32%)   0 (0%)
  memory                     70Mi (0%)    170Mi (2%)
  ephemeral-storage          0 (0%)       0 (0%)
  hugepages-1Gi              0 (0%)       0 (0%)
  hugepages-2Mi              0 (0%)       0 (0%)
  attachable-volumes-gce-pd  0            0
```
Let me try raising the scheduler log level...
The pod yaml actually does have the node-role.kubernetes.io/master toleration, so it was not supposed to be unschedulable on the masters.
Hi! We are hitting the same issue. In our case it also happens with Deployments, where we use anti-affinity so that the pods spread over every node, or a node selector to target a specific node.
Simply creating a pod with a node selector set to match the hostname of the failing node made scheduling fail. It reported that 5 nodes did not match the selector, but said nothing about the 6th node. Restarting the scheduler resolved the issue. It looks as if something about that node is cached and prevents scheduling on it.
As others said before, we see nothing in the logs about the failure.
Here is a minimal failing deployment (we removed the taint from the failing master):
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
restartPolicy: Always
schedulerName: default-scheduler
nodeSelector:
kubernetes.io/hostname: master-2
We see the same problem when the masters have taints and the deployment has a toleration for that taint. So it does not seem to be specifically related to daemonsets, tolerations, or affinity/anti-affinity. Once the failures start, nothing that targets the specific node can be scheduled. We see the issue from 1.18.2 up to 1.18.5 (we did not try 1.18.0 or .1).
Simply creating a pod with a node selector set to match the hostname of the failing node caused scheduling to fail.
Can you clarify whether the failures started before or after creating such a pod? I assume that node did not have any taint the pod could not tolerate?
@nodo can help with reproducing. Could you take a look at the NodeSelector code? We may have to add log lines while testing. You can also print the cache:
$ pidof kube-scheduler
$ sudo kill -SIGUSR2 <pid>
Note that this does not kill the scheduler process.
/priority critical-urgent
/unassign
It had already failed before we even tried to deploy this test deployment; we had already seen daemonsets and deployments stuck in Pending. The taint had been removed from the node.
Unfortunately we had to restart the nodes in the meantime, so the issue no longer shows and we lost the environment where it was happening. I will report back with more details as soon as we manage to reproduce it.
Please do. I tried to reproduce it in the past without success. I am more interested in the first instance of the failure; it could still be related to the taints.
I reproduced the issue. I ran the command you asked for, here is the info:
I0716 14:47:52.768362 1 factory.go:462] Unable to schedule default/test-deployment-558f47bbbb-4rt5t: no fit: 0/6 nodes are available: 5 node(s) didn't match node selector.; waiting
I0716 14:47:52.768683 1 scheduler.go:776] Updating pod condition for default/test-deployment-558f47bbbb-4rt5t to (PodScheduled==False, Reason=Unschedulable)
I0716 14:47:53.018781 1 httplog.go:90] verb="GET" URI="/healthz" latency=299.172µs resp=200 UserAgent="kube-probe/1.18" srcIP="127.0.0.1:57258":
I0716 14:47:59.469828 1 comparer.go:42] cache comparer started
I0716 14:47:59.470936 1 comparer.go:67] cache comparer finished
I0716 14:47:59.471038 1 dumper.go:47] Dump of cached NodeInfo
I0716 14:47:59.471484 1 dumper.go:49]
Node name: master-0-bug
Requested Resources: {MilliCPU:1100 Memory:52428800 EphemeralStorage:0 AllowedPodNumber:0 ScalarResources:map[]}
Allocatable Resources:{MilliCPU:2000 Memory:3033427968 EphemeralStorage:19290208634 AllowedPodNumber:110 ScalarResources:map[hugepages-1Gi:0 hugepages-2Mi:0]}
Scheduled Pods(number: 9):
...
I0716 14:47:59.472623 1 dumper.go:60] Dump of scheduling queue:
name: coredns-cd64c8d7c-29zjq, namespace: kube-system, uid: 938e8827-5d17-4db9-ac04-d229baf4534a, phase: Pending, nominated node:
name: test-deployment-558f47bbbb-4rt5t, namespace: default, uid: fa19fda9-c8d6-4ffe-b248-8ddd24ed5310, phase: Pending, nominated node:
Unfortunately that does not seem to help.
The cache dump is only for debugging, it does not change anything. Could you include the dump?
Also, assuming this was the first error, could you include the pod yaml and the node?
That is pretty much all there was in the dump, I only removed the other nodes. This was not the first error, but you can see the coredns pod in the dump, which was the first error. I am not sure what else you want from the dump.
Here is the yaml.
Got it, sorry, I had not noticed that you had already linked the relevant node and pod.
However, could you also include the pods scheduled on that node, in case there is a bug in the resource usage calculation?
Requested Resources: {MilliCPU:1100 Memory:52428800 EphemeralStorage:0 AllowedPodNumber:0 ScalarResources:map[]}
That AllowedPodNumber: 0 looks odd.
Here are the other pods on that node:
name: kube-controller-manager-master-0-bug, namespace: kube-system, uid: 095eebb0-4752-419b-aac7-245e5bc436b8, phase: Running, nominated node:
name: kube-proxy-xwf6h, namespace: kube-system, uid: 16552eaf-9eb8-4584-ba3c-7dff6ce92592, phase: Running, nominated node:
name: kube-apiserver-master-0-bug, namespace: kube-system, uid: 1d338e26-b0bc-4cef-9bad-86b7dd2b2385, phase: Running, nominated node:
name: kube-multus-ds-amd64-tpkm8, namespace: kube-system, uid: d50c0c7f-599c-41d5-a029-b43352a4f5b8, phase: Running, nominated node:
name: openstack-cloud-controller-manager-wrb8n, namespace: kube-system, uid: 17aeb589-84a1-4416-a701-db6d8ef60591, phase: Running, nominated node:
name: kube-scheduler-master-0-bug, namespace: kube-system, uid: 52469084-3122-4e99-92f6-453e512b640f, phase: Running, nominated node:
name: subport-controller-28j9v, namespace: kube-system, uid: a5a07ac8-763a-4ff2-bdae-91c6e9e95698, phase: Running, nominated node:
name: csi-cinder-controllerplugin-0, namespace: kube-system, uid: 8b16d6c8-a871-454e-98a3-0aa545f9c9d0, phase: Running, nominated node:
name: calico-node-d899t, namespace: kube-system, uid: e3672030-53b1-4356-a5df-0f4afd6b9237, phase: Running, nominated node:
allowedPodNumber is 0 in the requested resources in the dump for all nodes, and the other nodes are schedulable.
Node yaml:
apiVersion: v1
kind: Node
metadata:
annotations:
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: "0"
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2020-07-16T09:59:48Z"
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/instance-type: 54019dbc-10d7-409c-8338-5556f61a9371
beta.kubernetes.io/os: linux
failure-domain.beta.kubernetes.io/region: regionOne
failure-domain.beta.kubernetes.io/zone: nova
kubernetes.io/arch: amd64
kubernetes.io/hostname: master-0-bug
kubernetes.io/os: linux
node-role.kubernetes.io/master: ""
node.kubernetes.io/instance-type: 54019dbc-10d7-409c-8338-5556f61a9371
node.uuid: 00324054-405e-4fae-a3bf-d8509d511ded
node.uuid_source: cloud-init
topology.kubernetes.io/region: regionOne
topology.kubernetes.io/zone: nova
name: master-0-bug
resourceVersion: "85697"
selfLink: /api/v1/nodes/master-0-bug
uid: 629b6ef3-3c76-455b-8b6b-196c4754fb0e
spec:
podCIDR: 192.168.0.0/24
podCIDRs:
- 192.168.0.0/24
providerID: openstack:///00324054-405e-4fae-a3bf-d8509d511ded
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
status:
addresses:
- address: 10.0.10.14
type: InternalIP
- address: master-0-bug
type: Hostname
allocatable:
cpu: "2"
ephemeral-storage: "19290208634"
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 2962332Ki
pods: "110"
capacity:
cpu: "2"
ephemeral-storage: 20931216Ki
hugepages-1Gi: "0"
hugepages-2Mi: "0"
memory: 3064732Ki
pods: "110"
conditions:
- lastHeartbeatTime: "2020-07-16T10:02:20Z"
lastTransitionTime: "2020-07-16T10:02:20Z"
message: Calico is running on this node
reason: CalicoIsUp
status: "False"
type: NetworkUnavailable
- lastHeartbeatTime: "2020-07-16T15:46:11Z"
lastTransitionTime: "2020-07-16T09:59:43Z"
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: "2020-07-16T15:46:11Z"
lastTransitionTime: "2020-07-16T09:59:43Z"
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: "2020-07-16T15:46:11Z"
lastTransitionTime: "2020-07-16T09:59:43Z"
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: "2020-07-16T15:46:11Z"
lastTransitionTime: "2020-07-16T10:19:44Z"
message: kubelet is posting ready status. AppArmor enabled
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
nodeInfo:
architecture: amd64
bootID: fe410ed3-2825-4f94-a9f9-08dc5e6a955e
containerRuntimeVersion: docker://19.3.11
kernelVersion: 4.12.14-197.45-default
kubeProxyVersion: v1.18.5
kubeletVersion: v1.18.5
machineID: 00324054405e4faea3bfd8509d511ded
operatingSystem: linux
systemUUID: 00324054-405e-4fae-a3bf-d8509d511ded
And the pod:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-16T10:13:35Z"
generateName: pm-node-exporter-
labels:
controller-revision-hash: 6466d9c7b
pod-template-generation: "1"
name: pm-node-exporter-mn9vj
namespace: monitoring
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: pm-node-exporter
uid: 5855a26f-a57e-4b0e-93f2-461c19c477e1
resourceVersion: "5239"
selfLink: /api/v1/namespaces/monitoring/pods/pm-node-exporter-mn9vj
uid: 0db09c9c-1618-4454-94fa-138e55e5ebd7
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- master-0-bug
containers:
- args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
image: ***
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 9100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
name: pm-node-exporter
ports:
- containerPort: 9100
hostPort: 9100
name: metrics
protocol: TCP
resources:
limits:
cpu: 200m
memory: 150Mi
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /host/proc
name: proc
readOnly: true
- mountPath: /host/sys
name: sys
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: pm-node-exporter-token-csllf
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostNetwork: true
hostPID: true
nodeSelector:
node-role.kubernetes.io/master: ""
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: pm-node-exporter
serviceAccountName: pm-node-exporter
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/network-unavailable
operator: Exists
volumes:
- hostPath:
path: /proc
type: ""
name: proc
- hostPath:
path: /sys
type: ""
name: sys
- name: pm-node-exporter-token-csllf
secret:
defaultMode: 420
secretName: pm-node-exporter-token-csllf
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-07-16T10:13:35Z"
message: '0/6 nodes are available: 2 node(s) didn''t have free ports for the requested
pod ports, 3 node(s) didn''t match node selector.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
Thanks for all the info. @nodo, can you take it from here?
Also, I am trying to get more information using https://github.com/Nordix/kubernetes/commit/5c00cdf195fa61316f963f59e73c6cafc2ad9bdc.
/help
@maelk yes, please do if you find the bug.
@alculquicondor:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
/help
@maelk yes, please do if you find the bug.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
@maelk is there anything specific about the timing of when this issue first happens? For example, does it happen right after a node boots?
No, pods may have been scheduled there and running fine for a while. But once the issue starts, nothing can be scheduled on that node any more.
Lowering the priority until we have a reproducible case.
We were able to reproduce the bug with a scheduler that has extra log entries. What we can see is that one of the masters completely disappears from the list of nodes that are iterated over. We can see that the process starts with the 6 nodes (from the snapshot):
I0720 13:58:28.246507 1 generic_scheduler.go:441] Looking for a node for kube-system/coredns-cd64c8d7c-tcxbq, going through []*nodeinfo.NodeInfo{(*nodeinfo.NodeInfo)(0xc000326a90), (*nodeinfo.NodeInfo)(0xc000952000), (*nodeinfo.NodeInfo)(0xc0007d08f0), (*nodeinfo.NodeInfo)(0xc0004f35f0), (*nodeinfo.NodeInfo)(0xc000607040), (*nodeinfo.NodeInfo)(0xc000952000)}
but after that it iterates over only 5 nodes, so we end up with:
I0720 13:58:28.247420 1 generic_scheduler.go:505] pod kube-system/coredns-cd64c8d7c-tcxbq : processed 5 nodes, 0 fit
So one of the nodes is dropped from the list of potential nodes. Unfortunately we did not have enough logging in place at the start of the process, but we will try to get more logs.
@maelk, did you see log lines like "%v/%v on node %v, too many nodes fit"?
Otherwise, @pancernik, could you check the parallelization in workqueue.ParallelizeUntil(ctx, 16, len(allNodes), checkNode) for bugs?
No, that log line did not show up. I also think there could be an issue with the parallelization, or that the node gets excluded earlier by a filter if an error occurred there and it failed: https://…
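For context, here is a minimal sketch of the parallel fan-out pattern referenced above, built around client-go's workqueue.ParallelizeUntil. The node names and the predicate are placeholders for illustration only; this is not the scheduler's real filter code.
```go
// Minimal sketch of fanning a per-node check out over a node list with
// workqueue.ParallelizeUntil, the helper quoted in the thread. The predicate
// below is hypothetical and simply stands in for the real filter plugins.
package main

import (
	"context"
	"fmt"
	"strings"
	"sync/atomic"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	allNodes := []string{"master-0", "master-1", "master-2", "worker-1", "worker-2", "worker-3"}
	var fitCount int32

	checkNode := func(i int) {
		// Hypothetical filter: only workers "fit".
		if strings.HasPrefix(allNodes[i], "worker-") {
			atomic.AddInt32(&fitCount, 1)
		}
	}

	// 16 workers over len(allNodes) pieces, mirroring the call quoted above.
	workqueue.ParallelizeUntil(context.Background(), 16, len(allNodes), checkNode)

	fmt.Printf("processed %d nodes, %d fit\n", len(allNodes), fitCount)
}
```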
I noticed that one node goes through the filtering twice.
The logs are:
I0720 13:58:28.246507 1 generic_scheduler.go:441] Looking for a node for kube-system/coredns-cd64c8d7c-tcxbq, going through []*nodeinfo.NodeInfo{(*nodeinfo.NodeInfo)(0xc000326a90), (*nodeinfo.NodeInfo)(0xc000952000), (*nodeinfo.NodeInfo)(0xc0007d08f0), (*nodeinfo.NodeInfo)(0xc0004f35f0), (*nodeinfo.NodeInfo)(0xc000607040), (*nodeinfo.NodeInfo)(0xc000952000)}
I0720 13:58:28.246793 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-60846k0y-scheduler, fits: false, status: &v1alpha1.Status{code:3, reasons:[]string{"node(s) didn't match node selector"}}
I0720 13:58:28.246970 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-60846k0y-scheduler : status is not success
I0720 13:58:28.246819 1 taint_toleration.go:71] Checking taints for pod kube-system/coredns-cd64c8d7c-tcxbq for node master-0-scheduler : taints : []v1.Taint{v1.Taint{Key:"node-role.kubernetes.io/master", Value:"", Effect:"NoSchedule", TimeAdded:(*v1.Time)(nil)}} and tolerations: []v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d40d90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d40db0)}}
I0720 13:58:28.247019 1 taint_toleration.go:71] Checking taints for pod kube-system/coredns-cd64c8d7c-tcxbq for node master-2-scheduler : taints : []v1.Taint{v1.Taint{Key:"node-role.kubernetes.io/master", Value:"", Effect:"NoSchedule", TimeAdded:(*v1.Time)(nil)}} and tolerations: []v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d40d90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d40db0)}}
I0720 13:58:28.247144 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node master-2-scheduler, fits: false, status: &v1alpha1.Status{code:2, reasons:[]string{"node(s) didn't match pod affinity/anti-affinity", "node(s) didn't satisfy existing pods anti-affinity rules"}}
I0720 13:58:28.247172 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node master-2-scheduler : status is not success
I0720 13:58:28.247210 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-7dt1xd4k-scheduler, fits: false, status: &v1alpha1.Status{code:3, reasons:[]string{"node(s) didn't match node selector"}}
I0720 13:58:28.247231 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-7dt1xd4k-scheduler : status is not success
I0720 13:58:28.247206 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-60846k0y-scheduler, fits: false, status: &v1alpha1.Status{code:3, reasons:[]string{"node(s) didn't match node selector"}}
I0720 13:58:28.247297 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-60846k0y-scheduler : status is not success
I0720 13:58:28.247246 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-hyk0hg7r-scheduler, fits: false, status: &v1alpha1.Status{code:3, reasons:[]string{"node(s) didn't match node selector"}}
I0720 13:58:28.247340 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node worker-pool1-hyk0hg7r-scheduler : status is not success
I0720 13:58:28.247147 1 generic_scheduler.go:469] pod kube-system/coredns-cd64c8d7c-tcxbq on node master-0-scheduler, fits: false, status: &v1alpha1.Status{code:2, reasons:[]string{"node(s) didn't match pod affinity/anti-affinity", "node(s) didn't satisfy existing pods anti-affinity rules"}}
I0720 13:58:28.247375 1 generic_scheduler.go:483] pod kube-system/coredns-cd64c8d7c-tcxbq on node master-0-scheduler : status is not success
I0720 13:58:28.247420 1 generic_scheduler.go:505] pod kube-system/coredns-cd64c8d7c-tcxbq : processed 5 nodes, 0 fit
I0720 13:58:28.247461 1 generic_scheduler.go:430] pod kube-system/coredns-cd64c8d7c-tcxbq After scheduling, filtered: []*v1.Node{}, filtered nodes: v1alpha1.NodeToStatusMap{"master-0-scheduler":(*v1alpha1.Status)(0xc000d824a0), "master-2-scheduler":(*v1alpha1.Status)(0xc000b736c0), "worker-pool1-60846k0y-scheduler":(*v1alpha1.Status)(0xc000d825a0), "worker-pool1-7dt1xd4k-scheduler":(*v1alpha1.Status)(0xc000b737e0), "worker-pool1-hyk0hg7r-scheduler":(*v1alpha1.Status)(0xc000b738c0)}
I0720 13:58:28.247527 1 generic_scheduler.go:185] Pod kube-system/coredns-cd64c8d7c-tcxbq failed scheduling:
nodes snapshot: &cache.Snapshot{nodeInfoMap:map[string]*nodeinfo.NodeInfo{"master-0-scheduler":(*nodeinfo.NodeInfo)(0xc000607040), "master-1-scheduler":(*nodeinfo.NodeInfo)(0xc0001071e0), "master-2-scheduler":(*nodeinfo.NodeInfo)(0xc000326a90), "worker-pool1-60846k0y-scheduler":(*nodeinfo.NodeInfo)(0xc000952000), "worker-pool1-7dt1xd4k-scheduler":(*nodeinfo.NodeInfo)(0xc0007d08f0), "worker-pool1-hyk0hg7r-scheduler":(*nodeinfo.NodeInfo)(0xc0004f35f0)}, nodeInfoList:[]*nodeinfo.NodeInfo{(*nodeinfo.NodeInfo)(0xc000326a90), (*nodeinfo.NodeInfo)(0xc000952000), (*nodeinfo.NodeInfo)(0xc0007d08f0), (*nodeinfo.NodeInfo)(0xc0004f35f0), (*nodeinfo.NodeInfo)(0xc000607040), (*nodeinfo.NodeInfo)(0xc000952000)}, havePodsWithAffinityNodeInfoList:[]*nodeinfo.NodeInfo{(*nodeinfo.NodeInfo)(0xc000326a90), (*nodeinfo.NodeInfo)(0xc000607040)}, generation:857}
statuses: v1alpha1.NodeToStatusMap{"master-0-scheduler":(*v1alpha1.Status)(0xc000d824a0), "master-2-scheduler":(*v1alpha1.Status)(0xc000b736c0), "worker-pool1-60846k0y-scheduler":(*v1alpha1.Status)(0xc000d825a0), "worker-pool1-7dt1xd4k-scheduler":(*v1alpha1.Status)(0xc000b737e0), "worker-pool1-hyk0hg7r-scheduler":(*v1alpha1.Status)(0xc000b738c0)}
As you can see, the node worker-pool1-60846k0y-scheduler goes through the filtering twice.
No, that log line did not show up. I also think there could be an issue with the parallelization, or that the node gets excluded earlier by a filter if an error occurred there and it failed: Nordix@5c00cdf#diff-c237cdd9e4cb201118ca380732d7f361R464
That would show up in the logs afaik, so I will try adding more debug entries, specifically around that function and the parallelization.
However, an error there would show up as a scheduling error on the pod events.
I noticed that one node goes through the filtering twice.
To be honest, I do not think there is a bug in the parallelization (still worth checking), but this could indicate that the snapshot could not be built correctly from the cache (the cache is correct, as the cache dump shows): one node appears twice. The statuses are a map, so it makes sense that only 5 nodes show up in the last log line.
Here is the code (1.18 branch): https://github.com/kubernetes/kubernetes/blob/ec73e191f47b7992c2f40fadf1389446d6661d6d/pkg/scheduler/internal/cache/cache.go#L203
cc @ahg-g
I will try to add more logs around the cache part of the scheduler, specifically around adding and updating nodes, and around the snapshot. However, from the last line of the logs it looks like the snapshot is actually correct and contains all the nodes, so whatever happens seems to happen when that snapshot is processed afterwards.
cache != snapshot
The cache is a living thing, updated from events. The snapshot is updated (from the cache) before each scheduling cycle so that the state is locked in. We added optimizations to make this last step as fast as possible; it is possible the bug is there.
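For readers following along, here is a toy sketch of that cache-vs-snapshot split. The types, field names, and the generation-based refresh below are made up for illustration and are not the kube-scheduler code; the point is only to show how a bug in an incremental snapshot refresh can duplicate a node even though the cache itself stays correct.
```go
// Toy illustration: the cache mutates as node events arrive; the snapshot is
// refreshed from the cache at the start of each scheduling cycle, copying only
// nodes whose generation changed since the last refresh.
package main

import "fmt"

type nodeInfo struct {
	name       string
	generation int64
}

type cache struct {
	generation int64
	nodes      map[string]*nodeInfo
}

func (c *cache) addOrUpdate(name string) {
	c.generation++
	c.nodes[name] = &nodeInfo{name: name, generation: c.generation}
}

type snapshot struct {
	generation int64
	list       []*nodeInfo
}

// update copies only what changed since snapshot.generation. A bug in this
// incremental step can leave the list stale or duplicated while the cache is fine.
func (s *snapshot) update(c *cache) {
	for _, n := range c.nodes {
		if n.generation > s.generation {
			s.list = append(s.list, n) // simplistic: real code must replace, not blindly append
		}
	}
	s.generation = c.generation
}

func main() {
	c := &cache{nodes: map[string]*nodeInfo{}}
	s := &snapshot{}
	c.addOrUpdate("master-0")
	c.addOrUpdate("worker-1")
	s.update(c)                // scheduling cycle 1: list has 2 entries
	c.addOrUpdate("worker-1")  // a label update bumps the node's generation
	s.update(c)                // cycle 2: the naive append now holds worker-1 twice
	fmt.Println("snapshot length:", len(s.list))
}
```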
Thanks @maelk! That is very useful. The logs show that (*nodeinfo.NodeInfo)(0xc000952000) is already duplicated in the list at https://github.com/Nordix/kubernetes/commit/5c00cdf195fa61316f963f59e73c6cafc2ad9bdc#diff-c237cdd9e4cb201118ca380732d7f361R441, before the parallel code runs.
Actually, that comes from the snapshot itself, and it happens before this log message: https://github.com/Nordix/kubernetes/commit/5c00cdf195fa61316f963f59e73c6cafc2ad9bdc#diff-c237cdd9e4cb201118ca380732d7f361R161. The content of the snapshot is taken from https://github.com/Nordix/kubernetes/commit/5c00cdf195fa61316f963f59e73c6cafc2ad9bdc#diff-c237cdd9e4cb201118ca380732d7f361R436, so it looks like it is already duplicated there.
Which means it is already duplicated before the snapshot update finishes.
Which means it is already duplicated before the snapshot update finishes.
No, the snapshot is updated at the start of the scheduling cycle. The log happens during or before the snapshot update. However, the cache is correct, according to the dump in https://github.com/kubernetes/kubernetes/issues/91601#issuecomment-659465008.
Edit: I misread it, I missed the word "finishes" :)
The PR that optimized the snapshot update went into 1.18: https://github.com/kubernetes/kubernetes/pull/86919
I wonder if the node tree has a duplicate record.
I wonder if the node tree has a duplicate record.
@maelk can you dump the full list of nodes in the cache?
We build the full list from the tree rather than adding/removing items from the NodeInfoList, so if anything is duplicated, it is most likely the tree itself.
To clarify:
1) the cluster has 6 nodes (including the masters)
2) the node that was supposed to host the pod is not examined at all (there is no log line showing it), i.e. it might not be in the NodeInfoList at all
3) the NodeInfoList has 6 nodes, but one of them is duplicated
I wonder if the node tree has a duplicate record.
@maelk can you dump the full list of nodes in the cache?
A dump of both the node tree and the list/map would be great.
We are working on getting those. In the meantime, a small update: we can see in the logs:
I0720 13:37:30.530980 1 node_tree.go:100] Removed node "worker-pool1-60846k0y-scheduler" in group "" from NodeTree
I0720 13:37:30.531136 1 node_tree.go:86] Added node "worker-pool1-60846k0y-scheduler" in group "regionOne:\x00:nova" to NodeTree
and that is the exact point at which the missing node disappears. The last occurrence in the logs is at 13:37:24. On the next scheduling attempt the missing node is gone. So the bug seems to happen at, or to follow, the node_tree update. All of the nodes went through that update; this worker (60846k0y) was simply the last one to go through it.
When dumping the cache (with SIGUSR2), all 6 nodes are listed, with the pods running on them; no nodes are duplicated or missing.
We will make a new attempt with more debugging added around the snapshot functions: https://…
Removed node "worker-pool1-60846k0y-scheduler" in group "" from NodeTree
Interesting. That remove/add pair is presumably triggered by an updateNode call. The zone key is missing on the remove but present on the add, so the update was essentially adding the zone and region labels, right?
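For illustration, here is a small hedged sketch of how a zone key such as "regionOne:\x00:nova" can be derived from the node's region/zone labels. The helper name and label set below are assumptions, not the exact kube-scheduler implementation; the sketch only shows why a node that starts without topology labels sits in the "" group and is moved once those labels arrive, which is the remove/add pair seen in the log above.
```go
// Hypothetical helper illustrating zone-key construction from node labels.
package main

import "fmt"

func zoneKey(labels map[string]string) string {
	region := labels["topology.kubernetes.io/region"]
	zone := labels["topology.kubernetes.io/zone"]
	if region == "" && zone == "" {
		return "" // node without topology labels lands in the "" group
	}
	// "\x00" separates region and zone so the pair stays unambiguous.
	return region + ":\x00:" + zone
}

func main() {
	fmt.Printf("%q\n", zoneKey(map[string]string{})) // ""
	fmt.Printf("%q\n", zoneKey(map[string]string{
		"topology.kubernetes.io/region": "regionOne",
		"topology.kubernetes.io/zone":   "nova",
	})) // "regionOne:\x00:nova"
}
```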
Are there any other scheduler logs related to that node?
We are adding more logging and trying to reproduce the bug. I will get back as soon as we have more details.
We are working on getting those. In the meantime, a small update: we can see in the logs:
I0720 13:37:30.530980 1 node_tree.go:100] Removed node "worker-pool1-60846k0y-scheduler" in group "" from NodeTree I0720 13:37:30.531136 1 node_tree.go:86] Added node "worker-pool1-60846k0y-scheduler" in group "regionOne:\x00:nova" to NodeTree
Just pointing out that this is the node that gets repeated. @maelk, did you see similar messages for the other nodes, or none at all? As @ahg-g said, this is expected when a node first receives its topology labels.
Yes, this happened for all of the nodes, and it is expected. The coincidence is that this specific node was the last one to be updated, and it is at that exact time that the other node was lost.
Did you get the update logs for the missing node?
Did you get the update logs for the missing node?
Heh, I was typing the same question.
Perhaps the bug is that the whole zone is removed from the tree before all of its nodes are removed.
To be clear, I am not personally looking at the code; I am just trying to make sure we have all the information. And with what we have now, we should be able to find the bug. Feel free to send a PR if you can provide a failing unit test.
Did you get the update logs for the missing node?
Yes, they show the zone of that missing node being updated. There are log entries for all of the nodes.
To be honest, I still do not know the cause of the bug, but if we can pin it down we will submit a PR or a unit test.
Yes, they show the zone of that missing node being updated. There are log entries for all of the nodes.
OK, so I will assume that "this is the exact point where the missing node disappears" may not be correlated. Let's wait for the new logs. It would help if you could share all the scheduler logs you get, in a file.
I will, once we reproduce it with the new logging. From the existing logs we can actually see that the first pod scheduling failure happened right after that update, but we do not have enough information to know what happened in between, so please wait for better logs...
@maelk do you see any messages starting with "snapshot state is not consistent" in the scheduler logs?
Would it be possible to provide the full scheduler logs?
No, that message does not appear. I can provide a stripped-down log file (to avoid the repetition), but let's first wait until we have output that includes more logging around the snapshot.
We found the bug. The problem is in the nodeTree next() function, which in some cases does not return the list of all nodes: https://github.com/kubernetes/kubernetes/blob/release-1.18/pkg/scheduler/internal/cache/node_tree.go#L147
It shows up if you add the following test case there: https://…
{
name: "add nodes to a new and to an exhausted zone",
nodesToAdd: append(allNodes[5:9], allNodes[3]),
nodesToRemove: nil,
operations: []string{"add", "add", "next", "next", "add", "add", "add", "next", "next", "next", "next"},
expectedOutput: []string{"node-6", "node-7", "node-3", "node-8", "node-6", "node-7"},
},
The main problem is that when nodes are added, the indexes of some of the zones are not at 0. To hit the bug you need at least two zones, one shorter than the other, and the index of the longer zone must not be back at 0 when the next() function starts being called.
The fix I made is to reset the indexes before calling next() for the first time. I opened a PR to show the fix. It is against the 1.18 release, since that is what I was working on, and it is mostly meant to discuss how to fix this (or whether to fix the next() function itself). We can open a proper PR against master afterwards and backport as needed.
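To make the mechanism easier to follow, here is a simplified, self-contained sketch of a round-robin next() walk with per-zone cursors and of the mitigation described above (resetting the indexes before the list is built). The struct and function names are simplified assumptions, not the actual node_tree.go; in the real code the stale cursors cause some nodes to be repeated and others to be skipped, whereas this toy version simply drops them.
```go
// Simplified nodeTree sketch: next() walks zones round-robin using per-zone
// cursors; list() optionally resets the cursors first, which is the safe fix.
package main

import "fmt"

type nodeTree struct {
	zones     []string            // zone iteration order
	tree      map[string][]string // zone -> node names
	zoneIndex int
	nodeIndex map[string]int // per-zone cursor
	numNodes  int
}

// next returns one node per call, round-robin across zones; a zone whose
// cursor already sits at its end is treated as exhausted, so stale cursors
// make nodes vanish from the resulting list.
func (t *nodeTree) next() string {
	for range t.zones {
		zone := t.zones[t.zoneIndex]
		t.zoneIndex = (t.zoneIndex + 1) % len(t.zones)
		if i := t.nodeIndex[zone]; i < len(t.tree[zone]) {
			t.nodeIndex[zone] = i + 1
			return t.tree[zone][i]
		}
	}
	return ""
}

// resetIndexes is the proposed pre-pass: start every zone from 0 before listing.
func (t *nodeTree) resetIndexes() {
	t.zoneIndex = 0
	for z := range t.nodeIndex {
		t.nodeIndex[z] = 0
	}
}

func (t *nodeTree) list(reset bool) []string {
	if reset {
		t.resetIndexes()
	}
	out := make([]string, 0, t.numNodes)
	for i := 0; i < t.numNodes; i++ {
		out = append(out, t.next())
	}
	return out
}

func main() {
	t := &nodeTree{
		zones:     []string{"zone-a", "zone-b"},
		tree:      map[string][]string{"zone-a": {"a1"}, "zone-b": {"b1", "b2", "b3"}},
		nodeIndex: map[string]int{"zone-a": 1, "zone-b": 1}, // stale cursors from earlier calls
		numNodes:  4,
	}
	fmt.Println(t.list(false)) // without reset: a1 and b1 are never returned
	t.nodeIndex = map[string]int{"zone-a": 1, "zone-b": 1}
	fmt.Println(t.list(true)) // with reset: [a1 b1 b2 b3]
}
```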
I noticed the same issue in the iteration, but I could not link it to the duplicates in the snapshot. @maelk, could you write up a scenario in which that happens?
Yes, it can be exercised in a unit test by adding the small snippet I posted.
I am now adding test cases for the snapshot to make sure this is properly tested.
Big thanks to @igraecao for the help in reproducing the issue and running the tests in his setup.
Thanks everybody for debugging this notorious issue. Resetting the index before creating the list is safe, so I think we can do that for the 1.18 and 1.19 patch releases, and have a proper fix in the master branch.
The purpose of the next() function changed with the introduction of NodeInfoList, so we can certainly simplify it and change it into a toList function that just builds a list from the tree, starting from the beginning every time.
I understand the problem now: the calculation of whether a zone is exhausted is wrong, because it does not take into account where in each zone this UpdateSnapshot process started. And yes, it probably only shows up with uneven zones.
Great job finding this, @maelk!
I believe older versions have the same issue, but it is hidden by the fact that we do a tree pass on every scheduling cycle, whereas in 1.18 we build the resulting snapshot and keep it until the tree changes.
Since the round-robin strategy is implemented in generic_scheduler.go, it may be enough to simply reset all the counters before UpdateSnapshot, as your PR does.
@ahg-g just to double-check, that should also be fine in clusters where new nodes are constantly being added/removed, right?
Thanks @maelk for finding the root cause!
The purpose of the next() function changed with the introduction of NodeInfoList, so we can certainly simplify it and change it into a toList function that creates a list from the tree and starts from the beginning every time.
Given that cache.nodeTree.next() is now only called when building the snapshot nodeInfoList, I think it is safe to remove the indexes (both zoneIndex and nodeIndex) from the nodeTree struct altogether. Instead, we can come up with a simple nodeIterator() function that just walks the zones/nodes in a round-robin fashion.
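A minimal sketch of what such a stateless iterator could look like follows. The name nodeIterator comes from the comment above; everything else is an assumption for illustration, not the final implementation. It also produces the interleaved ordering discussed further down in the thread (for two zones of two nodes, n11, n21, n12, n22).
```go
// Stateless round-robin walk over zones/nodes: no cursors are kept on the tree,
// the whole interleaved list is produced in one pass.
package main

import "fmt"

func nodeIterator(zones []string, tree map[string][]string) []string {
	out := []string{}
	for depth := 0; ; depth++ {
		emitted := false
		for _, zone := range zones {
			if nodes := tree[zone]; depth < len(nodes) {
				out = append(out, nodes[depth])
				emitted = true
			}
		}
		if !emitted {
			return out // every zone is exhausted
		}
	}
}

func main() {
	zones := []string{"z1", "z2"}
	tree := map[string][]string{"z1": {"n11", "n12"}, "z2": {"n21", "n22"}}
	fmt.Println(nodeIterator(zones, tree)) // [n11 n21 n12 n22]
}
```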
By the way, there is a typo in the case you posted in https://github.com/kubernetes/kubernetes/issues/91601#issuecomment-662663090; it should be:
{
name: "add nodes to a new and to an exhausted zone",
nodesToAdd: append(allNodes[6:9], allNodes[3]),
nodesToRemove: nil,
operations: []string{"add", "add", "next", "next", "add", "add", "next", "next", "next", "next"},
expectedOutput: []string{"node-6", "node-7", "node-3", "node-8", "node-6", "node-7"},
// with codecase on master and 1.18, its output is [node-6 node-7 node-3 node-8 node-6 node-3]
},
@ahg-g just to double-check, that should also be fine in clusters where new nodes are constantly being added/removed, right?
I assume you are talking about the logic in generic_scheduler.go. If so, it does not matter much whether nodes are added or removed; the main thing we need to avoid is iterating over the nodes in the same order every time we schedule a pod, so we need a decent approximation of spreading the iteration across pods.
cache.nodeTree.next() is only called when building the snapshot nodeInfoList, so I think it is safe to remove the indexes (both zoneIndex and nodeIndex) from the nodeTree struct. Instead, we can come up with a simple nodeIterator() function that iterates over the zones/nodes in a round-robin fashion.
Yes, we simply need to iterate over all the zones/nodes in the same order every time.
I updated the PR with a unit test for the function that updates the snapshot list, specifically for this bug. I also refactored the next() function so that it iterates over the zones and nodes without the round-robin, which removes the problem.
That also sounds good to me, but we still need to round-robin between the zones when building the list, the same way we do today, i.e. by design.
I am not sure I really understand what you mean here. Is it that the order of the nodes matters and we need to round-robin between the zones? Otherwise, could we simply list all the nodes of each zone, zone by zone? But if, for two zones of two nodes each, you expect that interleaved order, then it is not a problem at all, right?
The order is important; we need to alternate between the zones when building the list. So if we have two zones of two nodes each, z1: {n11, n12} and z2: {n21, n22}, the list should be {n11, n21, n12, n22}.
Understood, thanks, I will look into it. In the meantime, can we go ahead with the quick fix? By the way, some tests are failing there, but I am not sure how they are related to my PR.
That works for me. Please send the fix for 1.18.
Understood. Thank you.
{ name: "add nodes to a new and to an exhausted zone", nodesToAdd: append(allNodes[5:9], allNodes[3]), nodesToRemove: nil, operations: []string{"add", "add", "next", "next", "add", "add", "add", "next", "next", "next", "next"}, expectedOutput: []string{"node-6", "node-7", "node-3", "node-8", "node-6", "node-7"}, },
@maelk, do you mean that this test skips node-5?
With the add operation fixed by https://github.com/kubernetes/kubernetes/pull/93516, I found that the test can iterate over all of the nodes:
{
name: "add nodes to a new and to an exhausted zone",
nodesToAdd: append(append(make([]*v1.Node, 0), allNodes[5:9]...), allNodes[3]),
nodesToRemove: nil,
operations: []string{"add", "add", "next", "next", "add", "add", "add", "next", "next", "next", "next"},
expectedOutput: []string{"node-5", "node-6", "node-3", "node-7", "node-8", "node-5"},
},
It can iterate over node-5, 6, 7, 8 and 3.
Forgive me if I misunderstood something here.
Yes, that was done on purpose, based on what was already there, but I can see how it is confusing, so I would rather make the additions work in a clearer way. Thanks for catching it.
How far back do you think this bug exists? 1.17? 1.16? I just ran into exactly the same problem on 1.17 on AWS, and restarting the node that was not being scheduled to fixed it.
@judgeaxl could you provide more details? As mentioned in https://github.com/kubernetes/kubernetes/issues/91601#issuecomment-662746695, I believe the bug is present in older versions as well, but I think it is transient there.
@maelk can you check?
Please also share the distribution of nodes across zones.
@alculquicondor unfortunately not.
@alculquicondor sorry, but we have already rebuilt the cluster for other reasons.
/retitle Some nodes are not considered for scheduling when there is zone imbalance