I am performing a performance test on a microservice application built in .NET 8.0, and we have recently encountered an issue. When I set a CPU limit on our pods, they begin to restart as soon as the application reaches 20 transactions per second (TPS).
I have monitored the situation using Dynatrace and various kubectl commands to check CPU and memory utilization, and I confirmed that resource usage is not exceeding the configured 60% threshold—it's even staying below 40% before the pods restart.
Despite my thorough investigation into this issue, I have not been able to find a solution. Any insights or guidance on how to resolve this issue would be greatly appreciated!
Please note that when I remove the CPU limit from the deployment file, the pods scale correctly and there are no restarts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app-image:latest
        resources:
          requests:
            memory: "1512Mi"
            cpu: "2"    # Request for CPU
          limits:
            memory: "2Gi"
            cpu: "4"    # Limit for CPU
        ports:
        - containerPort: 80
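One check worth adding here (not in the original post): average CPU utilization reported by monitoring can stay well under the limit while the container is still being throttled in short bursts, because CFS enforcement happens per 100 ms period. Assuming the node uses cgroup v2, the throttling counters can be read from inside a running pod (on cgroup v1 the path is /sys/fs/cgroup/cpu/cpu.stat):

kubectl exec <NAME OF POD> -- cat /sys/fs/cgroup/cpu.stat
# rising nr_throttled / throttled_usec under load means the pod is being
# CPU-throttled even though average utilization looks low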
1 Answer
You can run out of more resources than just CPU. Pick a node, any node, that is exhibiting the behavior. Profile for the highest-cost request in terms of total response time, and look at the variance (standard deviation) of the requests. Cost and variance combined are prime indicators of being bound on a member of a finite resource pool.
Once you have identified the highest-cost item, most likely a request that locks up resources other requests cannot access (or must wait for), bring in the deep-diagnostic superhero tool (Dynatrace) to drill into that request and profile all of its calls, looking for the call with the highest cost and variance. That is likely your root problem. Optimize it!
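A minimal sketch of the cost-plus-variance triage described above. The log format is an assumption, not something from the post: a hypothetical requests.csv with endpoint and response_ms columns, one row per request.

import csv
import statistics
from collections import defaultdict

# Group response times by endpoint (hypothetical per-request log).
latencies = defaultdict(list)
with open("requests.csv", newline="") as f:
    for row in csv.DictReader(f):
        latencies[row["endpoint"]].append(float(row["response_ms"]))

# Rank endpoints by mean response time plus standard deviation:
# a high combined score marks requests likely holding scarce resources.
scores = []
for endpoint, samples in latencies.items():
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples) if len(samples) > 1 else 0.0
    scores.append((mean + stdev, mean, stdev, endpoint))

for score, mean, stdev, endpoint in sorted(scores, reverse=True)[:5]:
    print(f"{endpoint}: mean={mean:.1f} ms, stdev={stdev:.1f} ms, score={score:.1f}")

Whichever endpoint tops this ranking is the first candidate to drill into with Dynatrace.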
Comment from Jason Snouffer (Nov 29, 2024): Check kubectl describe pod <NAME OF POD>. It should show the exit code from the last crash and related events.
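As a follow-up to that comment (the interpretation below is general Kubernetes knowledge, not from the thread): in the describe output, look at the Last State section. Exit code 137 with reason OOMKilled means the container was killed for exceeding its memory limit, while exit code 139 indicates a segmentation fault. The crashed container's own output is available with the --previous flag:

kubectl describe pod <NAME OF POD>
kubectl logs <NAME OF POD> --previous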