Users are getting tripped up by pods not being able to schedule due to resource deficiencies. It can be hard to know when a pod is pending because it just hasn't started up yet, or because the cluster doesn't have room to schedule it. http://kubernetes.io/v1.1/docs/user-guide/compute-resources.html#monitoring-compute-resource-usage helps, but isn't that discoverable (I tried a "get" on a pod that was pending first, and only after waiting a while and seeing it "stuck" pending did I use "describe" and realize it was a scheduling problem).
This is also complicated by system pods living in a hidden namespace. Users forget that those pods exist, yet they still count against the cluster's resources.
There are a few possible fixes offhand; I don't know what would be ideal:
1) Develop a new pod state other than Pending to represent "tried to schedule and failed for lack of resources".
2) Have kubectl get po or kubectl get po -o=wide display a column with details of why the pod is pending (in this case the waiting container.state, or the latest event.message).
3) Create a new kubectl command to describe resources more easily. I'm imagining a "kubectl usage" that gives an overview of total cluster CPU and Mem, per-node CPU and Mem, and each pod's/container's usage. Here we would include all pods, system ones included. This might be useful long-term alongside more complex schedulers, or when a cluster has enough resources in total but no single node does (diagnosing the "no holes large enough" problem).
The UX folks will know better than I do, but something along the lines of (2) seems reasonable.
(3) seems vaguely related to #15743, but I don't know whether they're close enough to combine.
In addition to the cases above, it would be handy to see what resource utilization you're getting. It would be nice if kubectl utilization requests showed something like this (maybe kubectl util or kubectl usage is better/shorter?):
cores: 4.455/5 cores (89%)
memory: 20.1/30 GiB (67%)
...
In this example the aggregate container requests are 4.455 cores and 20.1 GiB, and the cluster has 5 cores and 30 GiB in total.
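The percentages in that sketch are just aggregate requests divided by cluster capacity; a minimal illustration of the arithmetic, using the figures from the example above:

```python
# Utilization = aggregate container requests / cluster capacity,
# using the figures from the example above.
cores_requested, cores_total = 4.455, 5.0
mem_requested, mem_total = 20.1, 30.0

print(f"cores: {cores_requested}/{cores_total} cores ({cores_requested / cores_total:.0%})")
print(f"memory: {mem_requested}/{mem_total} GiB ({mem_requested / mem_total:.0%})")
```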
Or:
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
cluster1-k8s-master-1 312m 15% 1362Mi 68%
cluster1-k8s-node-1 124m 12% 233Mi 11%
I use the command below to get a quick view of resource usage. It's the simplest way I've found:
kubectl describe nodes
If there were a way to format the output of kubectl describe nodes, I'd happily script up a summary of all nodes' resource requests/limits.
Meanwhile, my hack is: kubectl describe nodes | grep -A 2 -e "^\\s*CPU Requests"
@from- thanks, that's exactly what I was looking for.
Here is mine:
$ cat bin/node-resources.sh
#!/bin/bash
set -euo pipefail
echo -e "Iterating...\n"
nodes=$(kubectl get node --no-headers -o custom-columns=NAME:.metadata.name)
for node in $nodes; do
echo "Node: $node"
kubectl describe node "$node" | sed '1,/Non-terminated Pods/d'
echo
done
@goltermann There are no sig labels on this issue. Please add a sig label by either:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>
_Note: method (1) will trigger a notification to the team. You can find the team list here._
@kubernetes/sig-cli-misc
You can use the commands below to check node CPU utilization:
alias util='kubectl get nodes | grep node | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''
Note: 4000m cores is the total cores in one node
alias cpualloc="util | grep % | awk '{print \$1}' | awk '{ sum += \$1 } END { if (NR > 0) { result=(sum*100)/(NR*4000); printf result/NR \"%\n\" } }'"
$ cpualloc
3.89358%
Note: 1600MB is the total memory in one node
alias memalloc='util | grep % | awk '\''{print $3}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { result=(sum*100)/(NR*1600); printf result/NR "%\n" } }'\'''
$ memalloc
24.6832%
@tomfotherby alias util='kubectl get nodes | grep node | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''
@alok87 - Thanks for your aliases. In my case this is what worked, given that we use bash and m3.large instance types (2 CPU, 7.5G memory):
alias util='kubectl get nodes --no-headers | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''
# Get CPU request total (we x20 because each m3.large has 2 vcpus (2000m) )
alias cpualloc='util | grep % | awk '\''{print $1}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*20), "%\n" } }'\'''
# Get mem request total (we x75 because each m3.large has 7.5G ram )
alias memalloc='util | grep % | awk '\''{print $5}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*75), "%\n" } }'\'''
$ util
ip-10-56-0-178.ec2.internal
CPU Requests CPU Limits Memory Requests Memory Limits
960m (48%) 2700m (135%) 630Mi (8%) 2034Mi (27%)
ip-10-56-0-22.ec2.internal
CPU Requests CPU Limits Memory Requests Memory Limits
920m (46%) 1400m (70%) 560Mi (7%) 550Mi (7%)
ip-10-56-0-56.ec2.internal
CPU Requests CPU Limits Memory Requests Memory Limits
1160m (57%) 2800m (140%) 972Mi (13%) 3976Mi (53%)
ip-10-56-0-99.ec2.internal
CPU Requests CPU Limits Memory Requests Memory Limits
804m (40%) 794m (39%) 824Mi (11%) 1300Mi (17%)
$ cpualloc
48.05 %
$ memalloc
9.95333 %
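As a sanity check on the alias arithmetic: averaging the per-node CPU-request percentages (each node being an m3.large with 2000m allocatable) reproduces the cpualloc figure. A sketch, assuming the four request values from the util output above:

```python
# Per-node CPU requests in millicores, taken from the util output above.
cpu_requests_m = [960, 920, 1160, 804]
node_capacity_m = 2000  # m3.large: 2 vCPUs = 2000m

percents = [100 * r / node_capacity_m for r in cpu_requests_m]
avg = sum(percents) / len(percents)
print(f"{avg:.2f} %")  # matches the cpualloc output: 48.05 %
```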
https://github.com/kubernetes/kubernetes/issues/17512#issuecomment-267992922 - kubectl top shows usage, not allocation, and allocation is what causes the insufficient CPU problem. There's a lot of confusion in this issue about the difference.
AFAICT there's no easy way to get a report of node CPU allocation by pod, since requests are per container in the spec. And even then it's difficult, since .spec.containers[*].requests may or may not have the limits / requests fields (in my experience).
/cc @mysterikkit
Joining the shell-script party: I have an older cluster running the CA with scale-down disabled. I wrote this script to determine roughly how far I can scale the cluster down when it starts bumping against the AWS route limits:
#!/bin/bash
set -e
KUBECTL="kubectl"
NODES=$($KUBECTL get nodes --no-headers -o custom-columns=NAME:.metadata.name)
function usage() {
local node_count=0
local total_percent_cpu=0
local total_percent_mem=0
local -r nodes="$@"
for n in $nodes; do
local requests=$($KUBECTL describe node $n | grep -A2 -E "^\\s*CPU Requests" | tail -n1)
local percent_cpu=$(echo $requests | awk -F "[()%]" '{print $2}')
local percent_mem=$(echo $requests | awk -F "[()%]" '{print $8}')
echo "$n: ${percent_cpu}% CPU, ${percent_mem}% memory"
node_count=$((node_count + 1))
total_percent_cpu=$((total_percent_cpu + percent_cpu))
total_percent_mem=$((total_percent_mem + percent_mem))
done
local -r avg_percent_cpu=$((total_percent_cpu / node_count))
local -r avg_percent_mem=$((total_percent_mem / node_count))
echo "Average usage: ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}
usage $NODES
It produces output like the following:
ip-REDACTED.us-west-2.compute.internal: 38% CPU, 9% memory
...many redacted lines...
ip-REDACTED.us-west-2.compute.internal: 41% CPU, 8% memory
ip-REDACTED.us-west-2.compute.internal: 61% CPU, 7% memory
Average usage: 45% CPU, 15% memory.
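The awk field splitting in the script above (`-F "[()%]"`) picks the percentages out of a describe-node requests line; the same extraction sketched in Python, with a made-up input line:

```python
import re

# A hypothetical "CPU Requests" summary line as printed by kubectl describe node.
line = "960m (48%)  2700m (135%)  630Mi (8%)  2034Mi (27%)"

# Percent fields in order: CPU request, CPU limit, memory request, memory limit.
percents = [int(p) for p in re.findall(r"\((\d+)%\)", line)]
percent_cpu, percent_mem = percents[0], percents[2]
print(f"{percent_cpu}% CPU, {percent_mem}% memory")  # 48% CPU, 8% memory
```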
The top command also has a pod option:
kubectl top pod
@ylogx https://github.com/kubernetes/kubernetes/issues/17512#issuecomment-326089708
My way to obtain the allocation, cluster-wide:
$ kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
It produces something like this:
kube-system:heapster-v1.5.0-dc8df7cc9-7fqx6
heapster:88m
heapster-nanny:50m
kube-system:kube-dns-6cdf767cb8-cjjdr
kubedns:100m
dnsmasq:150m
sidecar:10m
prometheus-to-sd:
kube-system:kube-dns-6cdf767cb8-pnx2g
kubedns:100m
dnsmasq:150m
sidecar:10m
prometheus-to-sd:
kube-system:kube-dns-autoscaler-69c5cbdcdd-wwjtg
autoscaler:20m
kube-system:kube-proxy-gke-cluster1-default-pool-cd7058d6-3tt9
kube-proxy:100m
kube-system:kube-proxy-gke-cluster1-preempt-pool-57d7ff41-jplf
kube-proxy:100m
kube-system:kubernetes-dashboard-7b9c4bf75c-f7zrl
kubernetes-dashboard:50m
kube-system:l7-default-backend-57856c5f55-68s5g
default-http-backend:10m
kube-system:metrics-server-v0.2.0-86585d9749-kkrzl
metrics-server:48m
metrics-server-nanny:5m
kube-system:tiller-deploy-7794bfb756-8kxh5
tiller:10m
Those are values, though. What I want to know is when I'm hitting, or getting close to, my allocation capacity; that seems like a pretty basic function of a cluster. Whether it's a statistic showing a high percentage or some kind of error... how do other people find this out? Do you just always use autoscaling on a cloud platform?
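Summing that jsonpath output by hand means normalizing the millicore strings; a small helper, as a sketch (empty values, like the prometheus-to-sd entries above, count as zero):

```python
def millicores(value):
    """Convert a Kubernetes CPU quantity string to millicores."""
    if not value:                     # unset request, e.g. "prometheus-to-sd:"
        return 0
    if value.endswith("m"):           # "150m" -> 150
        return int(value[:-1])
    return int(float(value) * 1000)   # "1" -> 1000, "0.5" -> 500

# A few request values from the output above.
requests = ["88m", "50m", "100m", "150m", "10m", ""]
print(sum(millicores(v) for v in requests))  # 398
```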
I created https://github.com/dpetzold/kube-resource-explorer/ to address (3). Here is some sample output:
$ ./resource-explorer -namespace kube-system -reverse -sort MemReq
Namespace Name CpuReq CpuReq% CpuLimit CpuLimit% MemReq MemReq% MemLimit MemLimit%
--------- ---- ------ ------- -------- --------- ------ ------- -------- ---------
kube-system event-exporter-v0.1.7-5c4d9556cf-kf4tf 0 0% 0 0% 0 0% 0 0%
kube-system kube-proxy-gke-project-default-pool-175a4a05-mshh 100m 10% 0 0% 0 0% 0 0%
kube-system kube-proxy-gke-project-default-pool-175a4a05-bv59 100m 10% 0 0% 0 0% 0 0%
kube-system kube-proxy-gke-project-default-pool-175a4a05-ntfw 100m 10% 0 0% 0 0% 0 0%
kube-system kube-dns-autoscaler-244676396-xzgs4 20m 2% 0 0% 10Mi 0% 0 0%
kube-system l7-default-backend-1044750973-kqh98 10m 1% 10m 1% 20Mi 0% 20Mi 0%
kube-system kubernetes-dashboard-768854d6dc-jh292 100m 10% 100m 10% 100Mi 3% 300Mi 11%
kube-system kube-dns-323615064-8nxfl 260m 27% 0 0% 110Mi 4% 170Mi 6%
kube-system fluentd-gcp-v2.0.9-4qkwk 100m 10% 0 0% 200Mi 7% 300Mi 11%
kube-system fluentd-gcp-v2.0.9-jmtpw 100m 10% 0 0% 200Mi 7% 300Mi 11%
kube-system fluentd-gcp-v2.0.9-tw9vk 100m 10% 0 0% 200Mi 7% 300Mi 11%
kube-system heapster-v1.4.3-74b5bd94bb-fz8hd 138m 14% 138m 14% 301856Ki 11% 301856Ki 11%
@shtouff
root@debian9:~# kubectl get po -n chenkunning-84 -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
error: error parsing jsonpath {range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}, unrecognized character in action: U+0027 '''
root@debian9:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.7-beta.0+$Format:%h$", GitCommit:"bb053ff0cb25a043e828d62394ed626fda2719a1", GitTreeState:"dirty", BuildDate:"2017-08-26T09:34:19Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.7-beta.0+$Format:84c3ae0384658cd40c1d1e637f5faa98cf6a965c$", GitCommit:"3af2004eebf3cbd8d7f24b0ecd23fe4afb889163", GitTreeState:"clean", BuildDate:"2018-04-04T08:40:48Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"linux/amd64"}
@harryge00: U+0027 is an apostrophe; it's a quoting problem.
@nfirvine thank you! I solved the problem by using:
kubectl get pods -n my-ns -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.limits.cpu} {"\n"}{end}' |awk '{sum+=$2 ; print $0} END{print "sum=",sum}'
This works for namespaces where each pod contains only a single container.
@xmik Hi, I'm using k8s 1.7 and running heapster. When I run $ kubectl top node --heapster-namespace=kube-system, it shows the error "metrics not available yet". Any clue for tackling that error?
@abushoeb: I don't think kubectl top supports the --heapster-namespace flag. Check that heapster is working with a command like: kubectl -n kube-system describe svc/heapster
@xmik you were right, heapster wasn't configured correctly. It's working now, thanks a lot. Do you know if there's a way to get real-time GPU usage information? This top command only shows CPU and memory usage.
I don't know. :(
@abushoebåããšã©ãŒããšã©ãŒïŒã¡ããªãã¯ã¯ãŸã å©çšã§ããŸãããã衚瀺ãããŸãã ã©ã®ããã«ä¿®æ£ããŸãããïŒ
@avgKol check your heapster deployment first. In my case it wasn't deployed correctly. One way to verify it is to access the metrics via a CURL command such as curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/. If it doesn't show metrics, check the heapster pod and its logs. The heapster metrics can also be accessed via a web browser in the same way.
For anyone interested: I created a tool to generate static HTML reports of Kubernetes resource usage (and cost): https
@hjacobs I'd like to use that tool, but I'm not a fan of installing/using Python packages. Any chance you could package it as a Docker image?
@tonglil the project is at a pretty early stage; my plan is to provide a ready-to-use Docker image (including a web server) that you can run with kubectl apply -f ..
Here is what works for me:
kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.allocatable.memory}{'\t'}{.status.allocatable.cpu}{'\n'}{end}"
The output looks like:
ip-192-168-101-177.us-west-2.compute.internal 251643680Ki 32
ip-192-168-196-254.us-west-2.compute.internal 251643680Ki 32
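The allocatable memory comes back in Ki, so a quick conversion is handy when eyeballing that output; a sketch using the figure above:

```python
allocatable_ki = 251643680              # from the jsonpath output above
allocatable_gib = allocatable_ki / 1024 ** 2
print(f"{allocatable_gib:.1f} GiB")     # roughly 240 GiB per node
```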
@tonglil a Docker image is now available: https
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
It seems like every month or so my Google searches bring me back to this issue. There are ways to get the stats I need with long jq strings, or with a Grafana dashboard full of calculations... but it would be _so_ nice if there were a command like:
# kubectl utilization cluster
cores: 19.255/24 cores (80%)
memory: 16.4/24 GiB (68%)
# kubectl utilization [node name]
cores: 3.125/4 cores (78%)
memory: 2.1/4 GiB (52%)
(similar to what @chrishiestand mentioned earlier in the thread).
Sometimes I build and throw away a few test clusters a week, and I'd like to be able to not have to build automation or add shell aliases, but just deploy a bunch of servers plus some apps and glance at "what is my overall utilization/pressure?"
Especially for smaller, more esoteric clusters, I don't want to set up node autoscaling (usually for money reasons), but I do need to know whether I have enough overhead to handle minor pod autoscaling events.
One more request: I'd like to be able to see total resource usage per namespace (at least; per deployment/label would also help), so I can figure out which namespaces are worth focusing resource-trimming work on.
I've created a small plugin, kubectl-view-utilization, that provides the functionality @geerlingguy described. It can be installed via the krew plugin manager. It's implemented in BASH and requires awk and bc. With the kubectl plugin framework this can be abstracted away from the core tool entirely.
Seems like others are facing this challenge as well. I built Kube Eagle (a Prometheus exporter), which gave us a better overview of cluster resources and ultimately let us make better use of the available hardware resources:
Here is a Python script to fetch actual node utilization in a table format:
https://github.com/amelbakry/kube-node-utilization
Kubernetes Node Utilization .........
+------------------------------------------------+--------+--------+
| NodeName                                       | CPU    | Memory |
+------------------------------------------------+--------+--------+
| ip-176-35-32-139.eu-central-1.compute.internal | 13.49% | 60.87% |
| ip-176-35-26-21.eu-central-1.compute.internal  | 5.89%  | 15.10% |
| ip-176-35-9-122.eu-central-1.compute.internal  | 8.08%  | 65.51% |
| ip-176-35-22-243.eu-central-1.compute.internal | 6.29%  | 19.28% |
+------------------------------------------------+--------+--------+
What's most important (at least for me, as for @amelbakry) is cluster-level utilization: "Do I need to add a machine?" / "Can I remove some machines?" / "Should I expect the cluster to scale up soon?"
And what about ephemeral storage usage? Is there a way to get that for all pods?
And, in case it helps, my hints:
kubectl get pods -o json -n kube-system | jq -r '.items[] | .metadata.name + " \n Req. RAM: " + .spec.containers[].resources.requests.memory + " \n Lim. RAM: " + .spec.containers[].resources.limits.memory + " \n Req. CPU: " + .spec.containers[].resources.requests.cpu + " \n Lim. CPU: " + .spec.containers[].resources.limits.cpu + " \n Req. Eph. DISK: " + .spec.containers[].resources.requests["ephemeral-storage"] + " \n Lim. Eph. DISK: " + .spec.containers[].resources.limits["ephemeral-storage"] + "\n"'
...
kube-proxy-xlmjt
Req. RAM: 32Mi
Lim. RAM: 256Mi
Req. CPU: 100m
Lim. CPU:
Req. Eph. DISK: 100Mi
Lim. Eph. DISK: 512Mi
...
echo "\nRAM Requests TOTAL:" && kubectl describe namespace kube-system | grep 'requests.memory' && echo "\nRAM Requests:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.requests.memory + " | " + .metadata.name'
echo "\nRAM Limits TOTAL:" && kubectl describe namespace kube-system | grep 'limits.memory' && echo "\nRAM Limits:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.limits.memory + " | " + .metadata.name'
echo "\nCPU Requests TOTAL:" && kubectl describe namespace kube-system | grep 'requests.cpu' && echo "\nCPU Requests:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.requests.cpu + " | " + .metadata.name'
echo "\nCPU Limits TOTAL:" && kubectl describe namespace kube-system | grep 'limits.cpu' && echo "\nCPU Limits:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.limits.cpu + " | " + .metadata.name'
echo "\nEph. DISK Requests TOTAL:" && kubectl describe namespace kube-system | grep 'requests.ephemeral-storage' && echo "\nEph. DISK Requests:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.requests["ephemeral-storage"] + " | " + .metadata.name'
echo "\nEph. DISK Limits TOTAL:" && kubectl describe namespace kube-system | grep 'limits.ephemeral-storage' && echo "\nEph. DISK Limits:\n" && kubectl get pods -o json -n kube-system | jq -r '.items[] | .spec.containers[].resources.limits["ephemeral-storage"] + " | " + .metadata.name'
RAM Requests TOTAL:
requests.memory 3504Mi 16Gi
RAM Requests:
64Mi | aws-alb-ingress-controller-6b569b448c-jzj6f
...
@kivagant-ba you can try this snippet to get the pod metrics per node, e.g. all the pods on a given node:
https://github.com/amelbakry/kube-node-utilization
def get_pod_metrics_per_node(node):
    pod_metrics = "/api/v1/pods?fieldSelector=spec.nodeName%3D" + node
    response = api_client.call_api(pod_metrics,
                                   'GET', auth_settings=['BearerToken'],
                                   response_type='json', _preload_content=False)
    response = json.loads(response[0].data.decode('utf-8'))
    return response
@kierenj I think the cluster-autoscaler component is supposed to handle that capacity question, depending on which cloud you run Kubernetes on. Not sure if that answers your question though.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Like many others, I've been coming back to this issue for years to grab the hacks I need to manage clusters via the CLI (AWS ASGs etc.).
@etopeter thanks for the cool CLI plugin. I love its simplicity. Do you have any advice on how to understand the numbers and what exactly they mean?
Here is a script that shows the current requests and limits of your pods, so anyone can work out their usage:
kubectl get pods --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}\
{'.spec.nodeName -'} {.spec.nodeName}{'\n'}\
{range .spec.containers[*]}\
{'requests.cpu -'} {.resources.requests.cpu}{'\n'}\
{'limits.cpu -'} {.resources.limits.cpu}{'\n'}\
{'requests.memory -'} {.resources.requests.memory}{'\n'}\
{'limits.memory -'} {.resources.limits.memory}{'\n'}\
{'\n'}{end}\
{'\n'}{end}"
Sample output:
...
kube-system:addon-http-application-routing-nginx-ingress-controller-6bq49l7
.spec.nodeName - aks-agentpool-84550961-0
requests.cpu -
limits.cpu -
requests.memory -
limits.memory -

kube-system:coredns-696c4d987c-pjht8
.spec.nodeName - aks-agentpool-84550961-0
requests.cpu - 100m
limits.cpu -
requests.memory - 70Mi
limits.memory - 170Mi

kube-system:coredns-696c4d987c-rtkl6
.spec.nodeName - aks-agentpool-84550961-2
requests.cpu - 100m
limits.cpu -
requests.memory - 70Mi
limits.memory - 170Mi

kube-system:coredns-696c4d987c-zgcbp
.spec.nodeName - aks-agentpool-84550961-1
requests.cpu - 100m
limits.cpu -
requests.memory - 70Mi
limits.memory - 170Mi

kube-system:coredns-autoscaler-657d77ffbf-7t72x
.spec.nodeName - aks-agentpool-84550961-2
requests.cpu - 20m
limits.cpu -
requests.memory - 10Mi
limits.memory -

kube-system:coredns-autoscaler-657d77ffbf-zrp6m
.spec.nodeName - aks-agentpool-84550961-0
requests.cpu - 20m
limits.cpu -
requests.memory - 10Mi
limits.memory -

kube-system:kube
.spec.nodeName - aks-agentpool-84550961-1
requests.cpu - 100m
limits.cpu -
requests.memory -
limits.memory -
...
@Spaceman1861 could you show an example?
@eduncan911 done.
I find it easier to read the output in a table format like this (note that it shows requests, not limits):
kubectl get pods -o custom-columns=NAME:.metadata.name,"CPU(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY(bytes)":.spec.containers[*].resources.requests.memory --all-namespaces
Sample output:
NAME CPU(cores) MEMORY(bytes)
pod1 100m 128Mi
pod2 100m 128Mi,128Mi
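Note the `128Mi,128Mi` cell: with `containers[*]`, multi-container pods produce comma-joined values, so summing per pod needs a small parse. A sketch:

```python
def pod_memory_mib(cell):
    """Sum a comma-joined custom-columns memory cell, e.g. "128Mi,128Mi" -> 256 MiB."""
    return sum(int(v[:-2]) for v in cell.split(",") if v and v != "<none>")

print(pod_memory_mib("128Mi"))        # 128
print(pod_memory_mib("128Mi,128Mi"))  # 256
```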
@lentzi90 FYI: Kubernetes Web View ("kubectl for the web") can show similar custom columns, e.g. per-pod CPU/memory requests; demo: https://kube-web-view.demo.j-serv.de/clusters/local/namespaces/...
Docs on custom columns: https://kube-web-view.readthedocs.io/en/latest/features.html#listing-resources
Oooo, fancy :)
Here is a script (deployment-health.sh) to get the utilization of pods in a deployment, based on usage and the configured limits:
https://github.com/amelbakry/kubernetes-scripts
Inspired by the answers from @lentzi90 and @ylogx, I built my own big script, which shows actual resource usage (kubectl top pods) together with resource requests and limits:
join -a1 -a2 -o 0,1.2,1.3,2.2,2.3,2.4,2.5, -e '<none>' <(kubectl top pods) <(kubectl get pods -o custom-columns=NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory) | column -t -s' '
Example output:
NAME CPU(cores) MEMORY(bytes) CPU_REQ(cores) MEMORY_REQ(bytes) CPU_LIM(cores) MEMORY_LIM(bytes)
xxxxx-847dbbc4c-c6twt 20m 110Mi 50m 150Mi 150m 250Mi
xxx-service-7b6b9558fc-9cq5b 19m 1304Mi 1 <none> 1 <none>
xxxxxxxxxxxxxxx-hook-5d585b449b-zfxmh 0m 46Mi 200m 155M 200m 155M
And here is the alias to use kstats in your terminal:
alias kstats='join -a1 -a2 -o 0,1.2,1.3,2.2,2.3,2.4,2.5, -e '"'"'<none>'"'"' <(kubectl top pods) <(kubectl get pods -o custom-columns=NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory) | column -t -s'"'"' '"'"
PS: I've only tested the scripts on my Mac; for Linux and Windows they may need some changes.
Here is a script (deployment-health.sh) to get the utilization of pods in a deployment, based on usage and the configured limits:
https://github.com/amelbakry/kubernetes-scripts
@amelbakry I get the following error when trying to run it on a Mac:
Failed to execute process './deployment-health.sh'. Reason:
exec: Exec format error
The file './deployment-health.sh' is marked as an executable but could not be run by the operating system.
Ah! You need a "#!" on the very first line. Alternatively, try "bash ./deployment-health.sh" to work around the issue.
/charles
PS: A PR has been opened to fix the issue.
@cgthayer it might be worth applying that PR fix globally. Also, when I ran the scripts on MacOS Mojave, a bunch of errors showed up, e.g. EU-specific zone names I'm not using. It looks like these scripts were written for specific projects.
Here is my fixed version of the join example; it also computes the column totals:
oc_ns_pod_usage () {
# show pod usage for cpu/mem
ns="$1"
usage_chk3 "$ns" || return 1
printf "$ns\n"
separator=$(printf '=%.0s' {1..50})
printf "$separator\n"
output=$(join -a1 -a2 -o 0,1.2,1.3,2.2,2.3,2.4,2.5, -e '<none>' \
<(kubectl top pods -n $ns) \
<(kubectl get -n $ns pods -o custom-columns=NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory))
totals=$(printf "%s" "$output" | awk '{s+=$2; t+=$3; u+=$4; v+=$5; w+=$6; x+=$7} END {print s" "t" "u" "v" "w" "x}')
printf "%s\n%s\nTotals: %s\n" "$output" "$separator" "$totals" | column -t -s' '
printf "$separator\n"
}
Example:
$ oc_ns_pod_usage ls-indexer
ls-indexer
==================================================
NAME CPU(cores) MEMORY(bytes) CPU_REQ(cores) MEMORY_REQ(bytes) CPU_LIM(cores) MEMORY_LIM(bytes)
ls-indexer-f5-7cd5859997-qsfrp 15m 741Mi 1 1000Mi 2 2000Mi
ls-indexer-f5-7cd5859997-sclvg 15m 735Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-4b7j2 92m 1103Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-5xj5l 88m 1124Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-6vvl2 92m 1132Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-85f66 85m 1151Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-924jz 96m 1124Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-g6gx8 119m 1119Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-hkhnt 52m 819Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-hrsrs 51m 1122Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-j4qxm 53m 885Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-lxlrb 83m 1215Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-mw6rt 86m 1131Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-pbdf8 95m 1115Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-qk9bm 91m 1141Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-sdv9r 54m 1194Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-t67v6 75m 1234Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-tkxs2 88m 1364Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-v6jl2 53m 747Mi 1 1000Mi 2 2000Mi
ls-indexer-filebeat-7858f56c9-wkqr7 53m 838Mi 1 1000Mi 2 2000Mi
ls-indexer-metricbeat-74d89d7d85-jp8qc 190m 1191Mi 1 1000Mi 2 2000Mi
ls-indexer-metricbeat-74d89d7d85-jv4bv 192m 1162Mi 1 1000Mi 2 2000Mi
ls-indexer-metricbeat-74d89d7d85-k4dcd 194m 1144Mi 1 1000Mi 2 2000Mi
ls-indexer-metricbeat-74d89d7d85-n46tz 192m 1155Mi 1 1000Mi 2 2000Mi
ls-indexer-packetbeat-db98f6fdf-8x446 35m 1198Mi 1 1000Mi 2 2000Mi
ls-indexer-packetbeat-db98f6fdf-gmxxd 22m 1203Mi 1 1000Mi 2 2000Mi
ls-indexer-syslog-5466bc4d4f-gzxw8 27m 1125Mi 1 1000Mi 2 2000Mi
ls-indexer-syslog-5466bc4d4f-zh7st 29m 1153Mi 1 1000Mi 2 2000Mi
==================================================
Totals: 2317 30365 28 28000 56 56000
==================================================
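The Totals line works because awk silently coerces strings like `15m` or `741Mi` to their leading number, so the units drop out of the sums; the same reduction sketched in Python:

```python
import re

def leading_number(s):
    """Mimic awk's numeric coercion: "15m" -> 15, "741Mi" -> 741, "<none>" -> 0."""
    m = re.match(r"\d+", s)
    return int(m.group()) if m else 0

# (CPU usage, memory usage) sample cells from the table above.
rows = [("15m", "741Mi"), ("92m", "1103Mi")]
cpu_total = sum(leading_number(c) for c, _ in rows)
mem_total = sum(leading_number(m) for _, m in rows)
print(cpu_total, mem_total)  # 107 1844
```

This also means the totals mix units blindly (m with cores, Mi with Gi), so they're only meaningful when all pods use the same units, as in the output above.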
And what is usage_chk3?
I'd like to share my tool ;-) kubectl-view-allocations: a kubectl plugin to list allocations (cpu, memory, gpu, ... requested vs limit vs allocatable, ...). Pull requests are welcome.
I made it because I wanted to provide a way for my (internal) users to see "who allocates what". By default every resource is shown, but the following sample only requests resources whose name contains "gpu":
> kubectl-view-allocations -r gpu
Resource Requested %Requested Limit %Limit Allocatable Free
nvidia.com/gpu 7 58% 7 58% 12 5
ââ node-gpu1 1 50% 1 50% 2 1
â ââ xxxx-784dd998f4-zt9dh 1 1
ââ node-gpu2 0 0% 0 0% 2 2
ââ node-gpu3 0 0% 0 0% 2 2
ââ node-gpu4 1 50% 1 50% 2 1
â ââ aaaa-1571819245-5ql82 1 1
ââ node-gpu5 2 100% 2 100% 2 0
â ââ bbbb-1571738839-dfkhn 1 1
â ââ bbbb-1571738888-52c4w 1 1
ââ node-gpu6 2 100% 2 100% 2 0
ââ bbbb-1571738688-vlxng 1 1
ââ cccc-1571745684-7k6bn 1 1
Future versions:
* will allow to hide (node, pod) level or to choose how to group (e.g. to provide an overview with only resources)
* installation via curl, krew, brew, ... (currently binaries are available under the releases section of GitHub)
Thanks kubectl-view-utilization for the inspiration.
My hack is:
kubectl describe nodes | grep -A 2 -e "^\\s*CPU Requests"
That no longer works :(
Try: kubectl describe node | grep -A5 "Allocated"
This is now the 4th most-requested open issue by thumbs-up reactions, yet it is still priority/backlog.
I'd be happy to take a stab at it if someone can point me in the right direction and/or if we can finalize a proposal. The UX of @davidB's tool goes beyond what kubectl itself currently delivers.
I use these commands: kubectl top nodes and kubectl describe node, and they do not give consistent results.
For example, with the first one the CPU (cores) figure is 1064m, but that result can't be found anywhere in the second one (1480m):
kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
abcd-p174e23ea5qa4g279446c803f82-abc-node-0 1064m 53% 6783Mi 88%
kubectl describe node abcd-p174e23ea5qa4g279446c803f82-abc-node-0
...
Resource Requests Limits
-------- -------- ------
cpu 1480m (74%) 1300m (65%)
memory 2981486848 (37%) 1588314624 (19%)
Any ideas on how to get CPU (cores) without using kubectl top nodes?
I'd like to share my tool ;-) kubectl-view-allocations: a kubectl plugin to list allocations (cpu, memory, gpu, ... requested vs limit vs allocatable, ...). Pull requests are welcome.
I made it because I wanted to provide a way for my (internal) users to see "who allocates what". By default every resource is shown, but the following sample only requests resources whose name contains "gpu":
> kubectl-view-allocations -r gpu Resource Requested %Requested Limit %Limit Allocatable Free nvidia.com/gpu 7 58% 7 58% 12 5 ââ node-gpu1 1 50% 1 50% 2 1 â ââ xxxx-784dd998f4-zt9dh 1 1 ââ node-gpu2 0 0% 0 0% 2 2 ââ node-gpu3 0 0% 0 0% 2 2 ââ node-gpu4 1 50% 1 50% 2 1 â ââ aaaa-1571819245-5ql82 1 1 ââ node-gpu5 2 100% 2 100% 2 0 â ââ bbbb-1571738839-dfkhn 1 1 â ââ bbbb-1571738888-52c4w 1 1 ââ node-gpu6 2 100% 2 100% 2 0 ââ bbbb-1571738688-vlxng 1 1 ââ cccc-1571745684-7k6bn 1 1
Future versions:
* will allow to hide (node, pod) level or to choose how to group, (eg to provide an overview with only resources) * installation via curl, krew, brew, ... (currently binary are available under the releases section of github)
Thanks kubectl-view-utilization for the inspiration.
Hello, it would be great if you could ship a statically compiled binary for this new release. On Ubuntu 16.04 I get:
kubectl-view-allocations: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by kubectl-view-allocations)
dpkg -l | grep glib
ii libglib2.0-0:amd64 2.48.2-0ubuntu4.4
@omerfsen can you try the new version of kubectl-view-allocations and comment on the ticket "version `GLIBC_2.25' not found #14"?
My way to obtain the allocation, cluster-wide:
$ kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
It produces something like this:
kube-system:heapster-v1.5.0-dc8df7cc9-7fqx6 heapster:88m heapster-nanny:50m kube-system:kube-dns-6cdf767cb8-cjjdr kubedns:100m dnsmasq:150m sidecar:10m prometheus-to-sd: kube-system:kube-dns-6cdf767cb8-pnx2g kubedns:100m dnsmasq:150m sidecar:10m prometheus-to-sd: kube-system:kube-dns-autoscaler-69c5cbdcdd-wwjtg autoscaler:20m kube-system:kube-proxy-gke-cluster1-default-pool-cd7058d6-3tt9 kube-proxy:100m kube-system:kube-proxy-gke-cluster1-preempt-pool-57d7ff41-jplf kube-proxy:100m kube-system:kubernetes-dashboard-7b9c4bf75c-f7zrl kubernetes-dashboard:50m kube-system:l7-default-backend-57856c5f55-68s5g default-http-backend:10m kube-system:metrics-server-v0.2.0-86585d9749-kkrzl metrics-server:48m metrics-server-nanny:5m kube-system:tiller-deploy-7794bfb756-8kxh5 tiller:10m
Hands down the best answer here.
Inspired by the scripts above, I created the following script to view the usage, requests and limits:
join -1 2 -2 2 -a 1 -a 2 -o "2.1 0 1.3 2.3 2.5 1.4 2.4 2.6" -e '<wait>' \
<( kubectl top pods --all-namespaces | sort --key 2 -b ) \
<( kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory | sort --key 2 -b ) \
| column -t -s' '
The script above failed at first because join expects sorted lists, hence the sort calls. As a result, next to the current usage from top, you can see the requested resources and (here) the limits of the deployments across all namespaces:
NAMESPACE NAME CPU(cores) CPU_REQ(cores) CPU_LIM(cores) MEMORY(bytes) MEMORY_REQ(bytes) MEMORY_LIM(bytes)
kube-system aws-node-2jzxr 18m 10m <none> 41Mi <none> <none>
kube-system aws-node-5zn6w <wait> 10m <none> <wait> <none> <none>
kube-system aws-node-h8cc5 20m 10m <none> 42Mi <none> <none>
kube-system aws-node-h9n4f 0m 10m <none> 0Mi <none> <none>
kube-system aws-node-lz5fn 17m 10m <none> 41Mi <none> <none>
kube-system aws-node-tpmxr 20m 10m <none> 39Mi <none> <none>
kube-system aws-node-zbkkh 23m 10m <none> 47Mi <none> <none>
cluster-autoscaler cluster-autoscaler-aws-cluster-autoscaler-5db55fbcf8-mdzkd 1m 100m 500m 9Mi 300Mi 500Mi
cluster-autoscaler cluster-autoscaler-aws-cluster-autoscaler-5db55fbcf8-q9xs8 39m 100m 500m 75Mi 300Mi 500Mi
kube-system coredns-56b56b56cd-bb26t 6m 100m <none> 11Mi 70Mi 170Mi
kube-system coredns-56b56b56cd-nhp58 6m 100m <none> 11Mi 70Mi 170Mi
kube-system coredns-56b56b56cd-wrmxv 7m 100m <none> 12Mi 70Mi 170Mi
gitlab-runner-l gitlab-runner-l-gitlab-runner-6b8b85f87f-9knnx 3m 100m 200m 10Mi 128Mi 256Mi
gitlab-runner-m gitlab-runner-m-gitlab-runner-6bfd5d6c84-t5nrd 7m 100m 200m 13Mi 128Mi 256Mi
gitlab-runner-mda gitlab-runner-mda-gitlab-runner-59bb66c8dd-bd9xw 4m 100m 200m 17Mi 128Mi 256Mi
gitlab-runner-ops gitlab-runner-ops-gitlab-runner-7c5b85dc97-zkb4c 3m 100m 200m 12Mi 128Mi 256Mi
gitlab-runner-pst gitlab-runner-pst-gitlab-runner-6b8f9bf56b-sszlr 6m 100m 200m 20Mi 128Mi 256Mi
gitlab-runner-s gitlab-runner-s-gitlab-runner-6bbccb9b7b-dmwgl 50m 100m 200m 27Mi 128Mi 512Mi
gitlab-runner-shared gitlab-runner-shared-gitlab-runner-688d57477f-qgs2z 3m <none> <none> 15Mi <none> <none>
kube-system kube-proxy-5b65t 15m 100m <none> 19Mi <none> <none>
kube-system kube-proxy-7qsgh 12m 100m <none> 24Mi <none> <none>
kube-system kube-proxy-gn2qg 13m 100m <none> 23Mi <none> <none>
kube-system kube-proxy-pz7fp 15m 100m <none> 18Mi <none> <none>
kube-system kube-proxy-vdjqt 15m 100m <none> 23Mi <none> <none>
kube-system kube-proxy-x4xtp 19m 100m <none> 15Mi <none> <none>
kube-system kube-proxy-xlpn7 0m 100m <none> 0Mi <none> <none>
metrics-server metrics-server-5875c7d795-bj7cq 5m 200m 500m 29Mi 200Mi 500Mi
metrics-server metrics-server-5875c7d795-jpjjn 7m 200m 500m 29Mi 200Mi 500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-06t94f <wait> 200m,100m 200m,200m <wait> 200Mi,128Mi 500Mi,500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-10lpn9j 1m 200m,100m 200m,200m 12Mi 200Mi,128Mi 500Mi,500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-11jrxfh <wait> 200m,100m 200m,200m <wait> 200Mi,128Mi 500Mi,500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-129hpvl 1m 200m,100m 200m,200m 12Mi 200Mi,128Mi 500Mi,500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-13kswg8 1m 200m,100m 200m,200m 12Mi 200Mi,128Mi 500Mi,500Mi
gitlab-runner-s runner-heq8ujaj-project-10386-concurrent-15qhp5w <wait> 200m,100m 200m,200m <wait> 200Mi,128Mi 500Mi,500Mi
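The join invocation in that script can be illustrated on toy data (hypothetical namespaces and pod names; join requires both inputs to be pre-sorted on the join key, here column 2):

```shell
# Toy stand-ins for the two inputs: "kubectl top pods"-like usage on the left,
# custom-columns CPU requests on the right; both sorted on column 2 (pod name).
top_file=$(mktemp); req_file=$(mktemp)
printf 'ns1 pod-a 5m\nns2 pod-b 7m\n'     | sort -k 2 -b > "$top_file"
printf 'ns1 pod-a 100m\nns2 pod-b 200m\n' | sort -k 2 -b > "$req_file"

# Join on column 2 of each file; -o picks namespace, name (the join field,
# written as 0), usage, and request, space-separated.
joined=$(join -1 2 -2 2 -o "1.1 0 1.3 2.3" "$top_file" "$req_file")
printf '%s\n' "$joined"
rm -f "$top_file" "$req_file"
```

This prints one merged line per pod (e.g. "ns1 pod-a 5m 100m"); the real script then pipes that through column -t for alignment.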
Worth noting: the output can be sorted by CPU consumption by appending:
| awk 'NR<2{print $0;next}{print $0| "sort --key 3 --numeric -b --reverse"}'
This works on Mac; I am not sure whether it also works on Linux (because of join, sort, and so on). Hopefully someone can use it until kubectl gets a proper view for this.
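The trick in that awk one-liner is to print the header row untouched and pipe only the remaining rows through sort. A self-contained demo on made-up data:

```shell
# Keep line 1 (the header) as-is; numerically reverse-sort the rest on
# column 2. awk flushes the sort pipe when it exits, so the sorted rows
# appear after the header.
printf 'NAME CPU\nb 5\na 9\nc 1\n' \
  | awk 'NR<2{print $0; next}{print $0 | "sort --key 2 --numeric -b --reverse"}'
```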
I have had a good experience with kube-capacity.
Example:
kube-capacity --util
NODE CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
* 560m (28%) 130m (7%) 40m (2%) 572Mi (9%) 770Mi (13%) 470Mi (8%)
example-node-1 220m (22%) 10m (1%) 10m (1%) 192Mi (6%) 360Mi (12%) 210Mi (7%)
example-node-2 340m (34%) 120m (12%) 30m (3%) 380Mi (13%) 410Mi (14%) 260Mi (9%)
For this tool to be really useful, it should detect all Kubernetes device plugins deployed in the cluster and show the usage for all of them. CPU/memory is definitely not enough. There are also GPUs, TPUs (for machine learning), Intel QAT, and probably more I don't know about. And what about storage? It should be trivial to see what is requested and what is used (ideally also in terms of IOPS).
@boniek83 , it is exactly because I needed to list GPUs that I created kubectl-view-allocations.
I am aware of your tool, and for my purpose it is the best thing currently available. Thanks for making it! I will test TPUs after Easter. It would be helpful if this data were available in web-app form with pretty graphs, so that I would not have to give data scientists access to Kubernetes at all. They only want to know who is eating up the resources, nothing more :)
Since none of the tools and scripts above fit my needs (and this issue is still open :( ), I hacked up my own variant:
https://github.com/eht16/kube-cargo-load
It provides a quick overview of the pods in a cluster and shows their configured memory requests and limits, as well as the actual memory usage. The idea is to get a picture of the ratio between configured memory limits and actual usage.
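That limit-to-usage ratio boils down to simple arithmetic; a sketch with made-up numbers for a single hypothetical pod:

```shell
# Hypothetical pod: 256Mi memory limit configured, 27Mi actually in use.
awk 'BEGIN {
  limit_mi = 256; usage_mi = 27
  printf "memory utilisation: %.0f%% of limit\n", 100 * usage_mi / limit_mi
}'
# prints: memory utilisation: 11% of limit
```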
How can I get these memory dump logs? My pods are hanging up frequently.
Which of the two needs to be considered to calculate the cluster resource utilization:
kubectl describe nodes
or kubectl top nodes
?
/kind feature
All the comments and hacks based on nodes did not work well for me. I also need something to track a higher-level view, such as the sum of resources per node pool.
Hi,
I would like to log the CPU and memory usage of the pods every 5 minutes over a period of time, and then use that data to create graphs in Excel. Any ideas? Thanks!
Hi,
It seems Google has pointed all of us to this issue :-)) Surprised it is still open after almost 5 years. Thanks for all the shell hacks and the other tools!
Simple and quick hack:
$ kubectl describe nodes | grep 'Name:\| cpu\| memory'
Name: XXX-2-wke2
cpu 1552m (77%) 2402m (120%)
memory 2185Mi (70%) 3854Mi (123%)
Name: XXX-2-wkep
cpu 1102m (55%) 1452m (72%)
memory 1601Mi (51%) 2148Mi (69%)
Name: XXX-2-wkwz
cpu 852m (42%) 1352m (67%)
memory 1125Mi (36%) 3624Mi (116%)
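The percentages in that grep output already encode overcommit, so a small awk pass can flag the offending nodes. A sketch, run here on an inlined sample of the same output rather than a live cluster:

```shell
# Sample of the `kubectl describe nodes | grep ...` output above, inlined.
sample='Name: XXX-2-wke2
 cpu 1552m (77%) 2402m (120%)
 memory 2185Mi (70%) 3854Mi (123%)'

# Print node/resource pairs where any percentage exceeds 100%.
overcommitted=$(echo "$sample" | awk '
  /^Name:/ { node = $2 }
  /cpu|memory/ {
    for (i = 1; i <= NF; i++)
      if ($i ~ /^\([0-9]+%\)$/) {
        pct = $i; gsub(/[()%]/, "", pct)
        if (pct + 0 > 100) print node, $1, $i
      }
  }')
printf '%s\n' "$overcommitted"
```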
Regarding the "simple and quick hack" (kubectl describe nodes | grep 'Name:\| cpu\| memory') above: it does not cover device plugins. Those are needed too; devices like these are resources as well.
Hi!
I created and shared this script:
https://github.com/Sensedia/open-tools/blob/master/scripts/listK8sHardwareResources.sh
The script contains a summary of some of the ideas shared here. It can be extended, and it may help other people to obtain the metrics more easily.
Thanks for sharing the tips and commands!
My use case ended up being a simple kubectl plugin that lists the CPU/RAM limits/reservations of the nodes in a table. It also checks the current pod CPU/RAM consumption (like kubectl top pods), but sorts the output by CPU in descending order. This is more about convenience than anything else, but maybe somebody else will find it useful too.
Quite a thread; is there still no proper solution from the Kubernetes team for properly calculating the current overall CPU usage of an entire cluster?
If you are running this on minikube, first enable the metrics-server add-on: minikube addons enable metrics-server
Then run the command: kubectl top nodes
If you are using Krew:
kubectl krew install resource-capacity
kubectl resource-capacity
NODE CPU REQUESTS CPU LIMITS MEMORY REQUESTS MEMORY LIMITS
* 16960m (35%) 18600m (39%) 26366Mi (14%) 3100Mi (1%)
ip-10-0-138-176.eu-north-1.compute.internal 2460m (31%) 4200m (53%) 567Mi (1%) 784Mi (2%)
ip-10-0-155-49.eu-north-1.compute.internal 2160m (27%) 2200m (27%) 4303Mi (14%) 414Mi (1%)
ip-10-0-162-84.eu-north-1.compute.internal 3860m (48%) 3900m (49%) 8399Mi (27%) 414Mi (1%)
ip-10-0-200-101.eu-north-1.compute.internal 2160m (27%) 2200m (27%) 4303Mi (14%) 414Mi (1%)
ip-10-0-231-146.eu-north-1.compute.internal 2160m (27%) 2200m (27%) 4303Mi (14%) 414Mi (1%)
ip-10-0-251-167.eu-north-1.compute.internal 4160m (52%) 3900m (49%) 4491Mi (14%) 660Mi (2%)