I'm trying to deploy a pod with three containers (Vue, Express, and MongoDB) to GKE using GitHub Actions. When I deployed it manually with the following commands:
kubectl apply -f deployment-sit.yaml
kubectl apply -f vue-service-sit.yaml
kubectl apply -f express-service-sit.yaml
kubectl apply -f mongodb-service-sit.yaml
kubectl apply -f sit-ingress.yaml
everything worked out fine.
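By "worked out fine" I mean the rollout completed and all three containers stayed Running, verified with something like:
kubectl rollout status deployment seg-dashboard-sit
kubectl get pods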
But when I run the CI pipeline, which uses exactly the same docker-compose file and Dockerfile to build the images, the Express image seems to come out in a way that makes its container crash repeatedly. When the container crashes, there are hardly any warnings or error messages beyond:
stream closed EOF for default/seg-dashboard-sit-6c9d6d5798-lscd5 (seg-dashboard-sit-express)
and
Warning BackOff 3m30s (x24 over 8m26s) kubelet Back-off restarting failed container seg-dashboard-sit-express in pod seg-dashboard-sit-6c9d6d5798-lscd5_default(7042eb1d-c7bc-455f-bfc5-4159733aa00e)
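For reference, those two messages are what I see in the previous container's logs and in the pod events, pulled with roughly the following (pod name taken from the event above):
kubectl describe pod seg-dashboard-sit-6c9d6d5798-lscd5
kubectl logs seg-dashboard-sit-6c9d6d5798-lscd5 -c seg-dashboard-sit-express --previous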
I'm fairly sure the problem happens in the build stage rather than the deploy stage, since the images built by the CI pipeline won't run properly when I deploy them.
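One way to isolate the build stage (without going through GKE at all) is to pull the CI-built image and run it directly; roughly, using the image tag from the compose file and the port from the Dockerfile:
docker pull asia-east1-docker.pkg.dev/visitor-access-system/tools/seg-dashboard-express:latest
docker run --rm -p 11200:11200 asia-east1-docker.pkg.dev/visitor-access-system/tools/seg-dashboard-express:latest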
Here are my YAML files for reference; can anyone help me out with this specific case?
express docker-compose.yaml
version: '3.8'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: seg-dashboard-vue
    env_file:
      - ./.env.production
    environment:
      - NODE_ENV=production
      - HOSTNAME=0.0.0.0
      - BASE_URL=/seg-dashboard/api
      - OPSGENIE_APIKEY=4075da9d-07b2-4db3-9388-2a1b7c851c78
      - OPSGENIE_APIURL=
    image: asia-east1-docker.pkg.dev/visitor-access-system/tools/seg-dashboard-express:latest
    platform: linux/amd64
express Dockerfile
FROM node:lts-jod AS build
LABEL maintainer="[email protected]" version="1.0"
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 11200
ENV NODE_ENV=production
ENV HOSTNAME=0.0.0.0
ENV BASE_URL=/seg-dashboard/api
ENV OPSGENIE_APIKEY=4075da9d-07b2-4db3-9388-2a1b7c851c78
ENV OPSGENIE_APIURL=
CMD ["npm", "start"]
ci-pipeline
name: SEG Dashboard CI/CD Pipeline
on:
  push:
    branches:
      # - main
      - develop
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '21'
      - name: Install Docker Compose
        run: |
          sudo apt-get update
          sudo apt-get install -y docker-compose
      - name: Build Docker images
        run: |
          docker-compose -f vue-app/docker-compose.yaml build --no-cache
          docker-compose -f express-app/docker-compose.yaml build --no-cache
      - name: Log in to Google Container Registry
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - name: Configure Docker to use the gcloud command-line tool as a credential helper
        run: gcloud auth configure-docker asia-east1-docker.pkg.dev
      - name: Push Docker images to Google Container Registry
        run: |
          docker push asia-east1-docker.pkg.dev/${{ secrets.GCP_PROJECT_ID }}/tools/seg-dashboard-vue:latest
          docker push asia-east1-docker.pkg.dev/${{ secrets.GCP_PROJECT_ID }}/tools/seg-dashboard-express:latest
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to Google Container Registry
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - name: Set up gcloud Cloud SDK environment
        uses: google-github-actions/setup-gcloud@v2
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          install_components: 'kubectl,gke-gcloud-auth-plugin'
      - name: Configure gcloud and kubectl
        run: |
          gcloud container clusters get-credentials ${{ secrets.GKE_CLUSTER_NAME }} --zone ${{ secrets.GKE_CLUSTER_LOCATION }} --project ${{ secrets.GCP_PROJECT_ID }}
          gcloud config set project ${{ secrets.GCP_PROJECT_ID }}
          kubectl config current-context
      - name: Deploy to Kubernetes
        run: |
          kubectl apply -f deployment-sit.yaml
          kubectl apply -f vue-service-sit.yaml
          kubectl apply -f express-service-sit.yaml
          kubectl apply -f mongodb-service-sit.yaml
          kubectl apply -f sit-ingress.yaml
          kubectl rollout restart deployment seg-dashboard-sit
I have tried granting broader permissions in GCP IAM, checking the environment variables, checking the health-check path, and adding and then removing the livenessProbe.
I expect the containers to run without crashing.
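One idea I'm considering is a smoke-test step in the build job, before the push, so a broken image fails in CI rather than in the cluster. A rough sketch (the container name, wait time, and health path are my assumptions, and without .env.production and a reachable MongoDB it may not fully start, but the crash logs would at least surface in CI):
docker run -d --name express-smoke -p 11200:11200 asia-east1-docker.pkg.dev/visitor-access-system/tools/seg-dashboard-express:latest
sleep 10
docker logs express-smoke
curl -f http://localhost:11200/seg-dashboard/api/ || exit 1
docker stop express-smoke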