I have two questions regarding deploying a local Kubernetes cluster using K3D and Helm.

I have successfully built a local registry and cluster on K3D using the commands k3d registry create registry.localhost -p 5000 and k3d cluster create c1 --registry-use k3d-registry.localhost:5000. I set the imagePullPolicy to Always, and it works.

However, I have to build the image, then push and pull it again (I use Helm) every time I want to test the service locally. To skip the push-pull process, I tried setting the imagePullPolicy to Never so that Helm would use the local image I just built. But the pod failed with ErrImageNeverPull, like this:

NAME                                 READY   STATUS              RESTARTS   AGE
webtest-deployment-7fb8ccb485-7vpxz   0/1     ErrImageNeverPull   0          21s

So, how can I make the deployment successful without the push-pull process to the registry by setting imagePullPolicy to Never (just use the local image after it’s built)?
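For what it's worth, imagePullPolicy: Never requires the image to already exist on the cluster node, and in k3d each node is a container with its own image store. A minimal sketch of loading a locally built image into the nodes with k3d (the cluster name c1 matches the question; the image name is illustrative):

# build the image locally as usual
docker build -t webtest:dev .

# copy it from the local Docker daemon into the k3d cluster nodes,
# so imagePullPolicy: Never can find it there
k3d image import webtest:dev -c c1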

The second issue is that when I make changes or revisions to the project files, I build a new Docker image and push it to the local registry. However, when I update the deployment using helm upgrade <release> <chart> or helm upgrade <release> <chart> --force, the changes do not take effect. Additionally, the pods are not replaced either before or after the upgrade. To apply the changes, I have to reinstall the release by running helm uninstall followed by helm install. Is this behavior common in Helm deployments, or am I missing a step to properly upgrade the service via Helm?

related question: Local Kubernetes Deployment using k3d - where should I push the docker images to?

asked Mar 28 at 3:20 by ansuf (edited Mar 28 at 9:53)
  • Your local Docker environment and your local Kubernetes environment are separate: k3d runs a Kubernetes installation in a container, and the Kubernetes node has its own "local" packages. That's where the registry comes in. The other symptoms you describe come from reusing the same image tag: a Deployment generally keys on the text of image: to know whether it needs to redeploy Pods or not, so if you can use a different tag for each build and helm upgrade --set tag=..., it may run more smoothly. – David Maze, Mar 28 at 10:08
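A minimal sketch of the tag-per-build workflow the comment suggests, assuming the chart exposes the tag as image.tag (the chart path and value name are illustrative):

# build and push with a fresh tag so the rendered image: field actually changes
docker build -t k3d-registry.localhost:5000/webtest:v2 .
docker push k3d-registry.localhost:5000/webtest:v2

# upgrade the release with the new tag; a changed pod spec triggers a rollout
helm upgrade webtest ./chart --set image.tag=v2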

1 Answer


If I understood you correctly, you want to test custom images with existing helm charts without changing the chart or going through the hassle of setting up a registry and doing the whole build/push/pull/imagePullSecrets routine. This can be achieved with a clever combination of k3d's and tilt's features and goes like this:

  1. The image is built by tilt on every change in the context directory.
  2. It is then automatically pushed by tilt into the registry created by k3d, which tilt auto-detects.
  3. Then tilt "injects" the newly built image into the helm chart.

For the sake of this example, let's assume that you want to deploy an nginx image containing a custom web site for your company with the bitnami helm chart for nginx.

Directory structure

.
├── Tiltfile
├── image
│   ├── Dockerfile
│   └── index.html
└── k3d.yaml

image/Dockerfile

FROM bitnami/nginx:1.27.4-debian-12-r6
# Allow modifications to the image
USER 0 
# Just an example for a custom image
ADD index.html /app/index.html
# Run nginx as a non-root user
USER 1001

Nothing much to see here. The HTML file is even less interesting, so I leave it out.

k3d.yaml

Also not much of a surprise. However, tilt will automatically detect the registry created by k3d and be able to push images to it, so there is no need to adjust insecure_registries in your Docker settings. k3d in turn is able to pull images from said insecure registry, so we have the complete "build->push->pull" cycle.

apiVersion: k3d.io/v1alpha5 
kind: Simple
metadata:
  name: demo # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1 # same as `--servers 1`
agents: 1 # same as `--agents 1`
image: rancher/k3s:v1.29.15-k3s1 # same as `--image rancher/k3s:v1.29.15-k3s1`
ports:
  - port: 8080:80 # same as `--port '8080:80@loadbalancer'`
    nodeFilters:
      - loadbalancer
  - port: 8443:443 # same as `--port '8443:443@loadbalancer'`
    nodeFilters:
      - loadbalancer
registries: # define how registries should be created or used
  create: # creates a default registry to be used with the cluster; same as `--registry-create localregistry`
    name: localregistry
    host: "0.0.0.0"
    hostPort: "5000"
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "180s" # wait timeout before aborting; same as `--timeout 180s`
  kubeconfig:
    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)
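After creating the cluster from this file, the registry can be verified like this (a sketch; k3d prefixes the registry name with k3d-):

k3d cluster create --config k3d.yaml

# the registry runs as its own container next to the cluster nodes
k3d registry list
docker ps --filter name=k3d-localregistry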

Tiltfile

# Build the custom image and push it to the local registry,
# which for k3d is autodetected by tilt.
# The image is built using the Dockerfile in the 'image' directory.
# Note that the docker image is rebuilt and the whole deployment starts over
# if the Dockerfile or any of the files in the `context` directory changes.
docker_build("company/custom_nginx", context="image")

# Load an extension to conveniently deal with the helm chart.
load("ext://helm_resource", "helm_resource", "helm_repo")

# We first need to load the helm repo
# and then we can load the helm resource.
helm_repo('bitnami', url="https://charts.bitnami.com/bitnami")

# Install the actual release.
# The image_deps are the images that are built before the helm resource is created.
# The image_keys are the keys that are used to inject the local image into the helm release.
helm_resource(
    'nginx-release',
    chart='bitnami/nginx',
    resource_deps=['bitnami'],
    flags=['--set=global.security.allowInsecureImages=true'],
    # THIS is where the magic happens:
    # 'helm_resource' injects the image built above into the chart's values.
    image_deps=['company/custom_nginx'],
    image_keys=[('image.registry', 'image.repository', 'image.tag')],
)

What is happening?

Some parts of the following will be [redacted] for privacy reasons.

After installing tilt, we can run k3d cluster create --config k3d.yaml && tilt up and watch the logs in tilt's UI.

  1. Tiltfile is parsed

    Loading Tiltfile at: [redacted]/Tiltfile
    Successfully loaded Tiltfile (1.295717079s)
    Auto-detected local registry from environment: &RegistryHosting{Host:localhost:5000,HostFromClusterNetwork:localregistry:5000,HostFromContainerRuntime:localregistry:5000,Help:https://k3d.io/stable/usage/registries/#using-a-local-registry,SingleName:,}
    

    Note that tilt indeed detected the registry we just created.

  2. The bitnami helm repo is added

    Running cmd: helm repo add bitnami https://charts.bitnami.com/bitnami --force-update
    "bitnami" has been added to your repositories
    

    Since we added the repo in the resource_deps of the helm_resource, the helm release will not be deployed until the helm repo has been added successfully.

  3. The helm release is deployed.

    This is where it gets interesting. Since we declared the docker image company/custom_nginx in the image_deps of the helm resource and told the helm_resource via image_keys where to use said image, the image values will be substituted:

    STEP 1/3 — Building Dockerfile: [company/custom_nginx]
    Building Dockerfile for platform linux/amd64:
    [...]
    STEP 2/3 — Pushing localhost:5000/company_custom_nginx:tilt-2de2a5b04212dc59
         Pushing with Docker client
         Authenticating to image repo: localhost:5000
         [...]
     STEP 3/3 — Deploying
          [...]
          Running cmd: ['helm', 'upgrade', '--install', '--set=global.security.allowInsecureImages=true', '--set', 'image.registry=localregistry:5000', '--set', 'image.repository=company_custom_nginx', '--set', 'image.tag=tilt-2de2a5b04212dc59', 'nginx-release', 'bitnami/nginx']
          Release "nginx-release" does not exist. Installing it now.
          NAME: nginx-release
          LAST DEPLOYED: Fri Mar 28 17:56:07 2025
          NAMESPACE: default
          STATUS: deployed
          REVISION: 1
          TEST SUITE: None
          NOTES:
          CHART NAME: nginx
          CHART VERSION: 19.0.3
          APP VERSION: 1.27.4
    

Conclusion

Using tilt and some four lines of configuration, you can not only test your custom images easily, but do so continuously, since the image is rebuilt and redeployed each time something changes in the image's context directory. And all this with two simple commands. Don't believe me? "All" the code is available on GitHub. Clone and try ;).
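For reference, the two commands in question:

k3d cluster create --config k3d.yaml
tilt up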
