After setting up a CI/CD pipeline using GitLab, Harbor, and Rancher, I identified an issue with the deployment of my application: when deployed through the pipeline, it doesn't function properly.

However, when deploying manually (building the image on my local machine, pushing it manually to Harbor, and applying it to the Rancher deployment), the application works perfectly. One thing I noticed is that the image in question has different sizes when I push it locally versus when it's built and sent through the pipeline. Through the pipeline, it's always smaller. This leads me to believe there might be some kind of caching issue.
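
A quick way to see where the size difference comes from is to compare the two images layer by layer; a rough sketch (the registry path is the one from the pipeline below, and cde:local-build is just a hypothetical tag for the locally built image):

# Pull the pipeline-built tag from Harbor and compare layer sizes
# against the locally built image.
docker pull harbor.domain/ciencia_de_dados/cde:latest
docker history --no-trunc harbor.domain/ciencia_de_dados/cde:latest
docker history --no-trunc cde:local-build   # hypothetical tag for the local build

A missing layer, or a much smaller "COPY . ." layer, usually points straight at the build context.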

Pipeline:

variables:
  HARBOR_URL: "harbor.domain"
  PROJECT_NAME: "ciencia_de_dados/cde"
  KUBECONFIG: "${KUBECONFIG}"

stages:
  - prepare
  - build
  - push
  - deploy

prepare_kubeconfig:
  stage: prepare
  image: bitnami/minideb:latest
  tags:
    - docker
    - rancher
  script:
    - apt-get update && apt-get install -y curl
    - echo "$KUBECONFIG" | base64 -d > temp_kubeconfig

.build_and_push_before_script: &build_and_push_before_script
  - echo "$HARBOR_PASSWORD" | docker login -u "$HARBOR_USER" --password-stdin "$HARBOR_URL"

build_image:
  stage: build
  image: docker:latest
  tags:
    - docker
    - rancher
  before_script:
    *build_and_push_before_script
  script:
    - ls -R
    - docker build --no-cache -t "$HARBOR_URL/$PROJECT_NAME:latest" -f Dockerfile .
  only:
    - develop

push_image:
  stage: push
  image: docker:latest
  tags:
    - docker
  before_script:
    *build_and_push_before_script
  script:
    - docker push "$HARBOR_URL/$PROJECT_NAME:latest"
  only:
    - develop

deploy_to_rancher:
  stage: deploy
  image: bitnami/minideb:latest
  tags:
    - rancher
  before_script:
    - apt-get update && apt-get install -y curl
    - curl -LO "/$(curl -L -s .txt)/bin/linux/amd64/kubectl"
    - install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    - echo "$KUBECONFIG" | base64 -d > temp_kubeconfig
  script:
    - curl -v -k ":443/k8s/clusters/c-m-cn8gmtcs/version"
    - KUBECONFIG=temp_kubeconfig kubectl create namespace cde --dry-run=client -o yaml | KUBECONFIG=temp_kubeconfig kubectl apply -f -
    - KUBECONFIG=temp_kubeconfig kubectl apply -f deployment.yaml --validate=false
  only:
    - develop

Dockerfile:

FROM python:3.9.19-bullseye
RUN apt-get update && \
    apt-get install -y libaio1 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app/cde
COPY requirements.txt .
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --no-cache-dir --upgrade pip && \
    pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org -r requirements.txt
COPY instantclient_21_12 /opt/oracle/instantclient_21_12
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_21_12
ENV TNS_ADMIN=/opt/oracle/instantclient_21_12/network/admin
COPY . .
EXPOSE 8000
CMD ["bash", "-c", "python manage.py runserver 0.0.0.0:8000"]

I checked the .dockerignore file to ensure nothing in my repository was being excluded in a way that could break the build. I reviewed the Dockerfile and validated its configuration. I added the --no-cache flag to the build step and cleared the cache on the runner that executes this job. Additionally, I checked for configuration and authentication issues with Harbor.
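
Another check that can help here is listing exactly what ends up in the build context on the runner versus locally, since a file excluded by .dockerignore, or simply not committed to the repository, would make the "COPY . ." layer smaller. A minimal sketch using a throwaway image (Dockerfile.context-check is a hypothetical file name):

# Build a throwaway image that copies the whole build context,
# then list its contents to see what .dockerignore actually lets through.
printf 'FROM busybox\nCOPY . /build-context\n' > Dockerfile.context-check
docker build -f Dockerfile.context-check -t context-check .
docker run --rm context-check find /build-context | sort

Running this both locally and in a pipeline job and diffing the two listings shows exactly what differs between the two build contexts.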

asked Nov 20, 2024 at 4:31 by Héricles Francisco
  • (i) Update your question and explain what "it didn't function properly" means exactly. (ii) Consider using the Instant Client 19c or 23ai LTS releases instead of 21c. (iii) Use ldconfig instead of setting LD_LIBRARY_PATH. (iv) There's no need to set TNS_ADMIN since you are setting it to the default location. – Christopher Jones, Nov 27, 2024 at 4:35
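
For reference, a minimal sketch of the ldconfig approach suggested in the comment above (the Instant Client path is the one already used in the question's Dockerfile):

# Register the Instant Client libraries with the dynamic linker instead of
# exporting LD_LIBRARY_PATH.
COPY instantclient_21_12 /opt/oracle/instantclient_21_12
RUN echo /opt/oracle/instantclient_21_12 > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig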

1 Answer


After a detailed investigation, I identified the root cause and implemented the following solution:

Root Cause

The issue was related to the way Docker handles build and push operations within the CI/CD pipeline:

When using the traditional docker build and docker push commands, caching and build layers weren't being handled properly within the pipeline. Additionally, Docker-in-Docker (DinD) setups can sometimes create conflicts, especially with layer handling and authentication when interacting with private registries like Harbor.

Solution

To resolve this, I switched to Kaniko, a tool designed specifically for building and pushing Docker images in containerized CI/CD environments without requiring access to a Docker daemon.
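
For comparison, staying with docker build and docker push would typically mean running a docker:dind service and doing both steps in a single job, so the image that gets pushed is exactly the one just built. A minimal sketch of that alternative, reusing the variables from the question (the DOCKER_TLS_CERTDIR value is an assumption, not taken from the original pipeline):

build_and_push:
  stage: build
  image: docker:latest
  services:
    - docker:dind                       # per-job Docker daemon
  variables:
    DOCKER_TLS_CERTDIR: "/certs"        # standard dind TLS setup (assumption)
  script:
    - echo "$HARBOR_PASSWORD" | docker login -u "$HARBOR_USER" --password-stdin "$HARBOR_URL"
    - docker build --no-cache -t "$HARBOR_URL/$PROJECT_NAME:latest" -f Dockerfile .
    - docker push "$HARBOR_URL/$PROJECT_NAME:latest"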

Steps to Resolve

1. Update your CI/CD pipeline: replace the traditional docker build and docker push commands with Kaniko. Here's the updated pipeline script:

stages:
  - build

build_image:
  stage: build
  tags:
    - rancher
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: ['']
  variables:
    DOCKER_CONFIG: /kaniko/.docker
    VERSION_TAG: latest
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${HARBOR_URL}\":{\"auth\":\"$(echo -n ${HARBOR_USERNAME}:${HARBOR_PASSWORD} | base64 -w 0)\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${HARBOR_URL}/${HARBOR_PROJECT}/${CI_PROJECT_NAME}:${VERSION_TAG}"

Explanation:

- The gcr.io/kaniko-project/executor:debug image is used for building and pushing Docker images.
- The auths JSON block configures the Harbor registry authentication dynamically using pipeline variables.
- Kaniko builds the image and pushes it directly to Harbor.

2. Configure environment variables: ensure the following variables are correctly set in your CI/CD tool:

- HARBOR_URL: The URL of your Harbor instance.
- HARBOR_USERNAME: Your Harbor username with push permissions.
- HARBOR_PASSWORD: The corresponding password or access token.
- HARBOR_PROJECT: The project in Harbor where the image will be stored.
- VERSION_TAG: The tag for your image.

Benefits of Kaniko:

- Kaniko builds images in a secure, containerized environment without requiring a Docker daemon.
- Proper handling of multi-layer builds ensures that all layers are preserved.
- It simplifies the interaction with private registries like Harbor.

Outcome

After implementing this solution, my pipeline successfully built and pushed images to Harbor with the correct size and functionality. The discrepancy between manual and pipeline pushes was eliminated.
