We need to deploy to a cluster of VMs that form an on-prem Docker Swarm. We use Azure Pipelines as our CI/CD tool. Until we deploy to main, everything works like a charm, since we use one VM for every environment except main, which uses multiple VMs. The flow is: copy the necessary files over SSH to the VM, then run an SSH task against the destination VM executing the needed deploy commands. So the part of the pipeline that deploys for main looks like this:

  - job: Deploy_to_VM1
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    dependsOn: 
      - Download_Artifact_and_Version
    variables:
      version: $[ dependencies.Download_Artifact_and_Version.outputs['setVersionVar.var'] ]
    displayName: 'Deploy to VM1'
    steps:
      - task: CopyFilesOverSSH@0
        displayName: 'Upload service files to vm1'
        inputs:
          sshEndpoint: 'sshvm1'
          sourceFolder: '$(System.DefaultWorkingDirectory)/docker-compose'
          contents: |
            config.yml
            docker-compose.yml
          targetFolder: '/tmp'
          cleanTargetFolder: false
          overwrite: true
          readyTimeout: '20000'
    
      - task: SSH@0
        inputs:
          sshEndpoint: 'sshvm1'
          runOptions: 'inline'
          inline: |
            export version=$(version)
            echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)
            docker stack deploy -c /tmp/docker-compose.yml service
        condition: succeeded()
        name: deploy_vm1

  - job: Deploy_to_VM2
    dependsOn: 
      - Download_Artifact_and_Version
      - Deploy_to_VM1
    condition: eq(dependencies.Deploy_to_VM1.result, 'Failed')
    variables:
      version: $[ dependencies.Download_Artifact_and_Version.outputs['setVersionVar.var'] ]
    displayName: 'Deploy to VM2'
    steps:
      - task: CopyFilesOverSSH@0
        displayName: 'Upload service files to vm2'
        inputs:
          sshEndpoint: 'sshvm2'
          sourceFolder: '$(System.DefaultWorkingDirectory)/docker-compose'
          contents: |
            config.yml
            docker-compose.yml
          targetFolder: '/tmp'
          cleanTargetFolder: false
          overwrite: true
          readyTimeout: '20000'
    
      - task: SSH@0
        inputs:
          sshEndpoint: 'sshvm2'
          runOptions: 'inline'
          inline: |
            export version=$(version)
            echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)  
            docker stack deploy -c /tmp/docker-compose.yml service
        condition: succeeded()
        name: deploy_vm2

This piece of the pipeline always produces two errors in its output:

##[error]WARNING! Your password will be stored unencrypted in /***/.docker/config.json.

##[error]Since --detach=false was not specified, tasks will be created in the background. In a future release, --detach=false will become the default.

But the pipelines run successfully. The issue is that we have changed VMs now. This piece of pipeline worked as expected for one of our services: it deployed on the first available VM with status "partially succeeded" and stopped. For another service, the first job deploys successfully but is flagged as failed, so the pipeline continues to the second VM, deploys successfully again, marks that job as failed too, and the same goes for every job in the list of VMs.

Does anybody have any insight into what is happening, or a better deployment strategy for a cluster of on-prem VMs using SSH connections?

I was also looking at the matrix job strategy, which takes a list of inputs and could supply the SSH service connection dynamically. But the first issue is that I don't know how to stop once the first deployment succeeds, and the second is that I cannot pass the SSH service endpoint dynamically. So I do not know if it fits my need; a sketch of what I was trying follows.
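For illustration, this is roughly the shape I was experimenting with (only a sketch: the matrix variable name sshEndpointName is made up by me, maxParallel: 1 merely serializes the legs rather than stopping after the first success, and the $(sshEndpointName) reference in sshEndpoint is exactly the dynamic part I could not get to work):

  - job: Deploy
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    strategy:
      maxParallel: 1
      matrix:
        vm1:
          sshEndpointName: 'sshvm1'
        vm2:
          sshEndpointName: 'sshvm2'
    steps:
      - task: SSH@0
        inputs:
          # dynamic service connection reference - the part that does not resolve
          sshEndpoint: '$(sshEndpointName)'
          runOptions: 'inline'
          inline: |
            export version=$(version)
            docker stack deploy -c /tmp/docker-compose.yml service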


asked Feb 25 at 10:53 by Spyros Tserentzoulias
  • Please clarify these: 1. “And this piece of pipeline worked as expected” - Do you mean that despite the two errors mentioned above in the pipeline, the service can still be deployed successfully? – Ziyang Liu-MSFT Commented Feb 26 at 5:55
  • 2. "for another service, the 1st job will deploy successfully, but will flag it as failed" - What is another service you are talking about? According to the YAML you provided, it seems that you are deploying the same service. What error did you encounter when deploying another service? If you are using the same pipeline and encounter the same error, then there should not be a run marked as "partially succeeded" and another marked as "failed". – Ziyang Liu-MSFT Commented Feb 26 at 5:55
  • @ZiyangLiu-MSFT 1. Exactly. Even if I got a Failed status from the job, the service stack was successfully deployed. 2. Other services have their own pipeline YAML file with the same flow, copying different files and executing other compose files, to the same target VMs. So: different pipelines with the same flow, for different files and service Docker files. The one pipeline partially succeeds and stops after deploying the service at the first available target machine. The second pipeline, even though it marks the same job as failed, successfully deploys and tries again for every target VM. – Spyros Tserentzoulias Commented Feb 26 at 8:03
  • "The one pipeline partially succeeds and stops after deploying the service at the first available target machine. The second pipeline even though it marks the same job as failed, succeesfully deploys and tries again for every target vm." - In your "Deploy_to_VM2" job, the condition is eq(dependencies.Deploy_to_VM1.result, 'Failed'), which means that only the first job is failed, the second job will run. – Ziyang Liu-MSFT Commented Feb 26 at 9:14
  • According to the current information, there may be two situations. The first is that the errors from deploying different services differ, so some runs are "failed" and some are "partially succeeded". The second is that you used continueOnError: true in the tasks of the first job in some pipelines, so even if there are failed tasks, the job is marked "partially succeeded". Although your service was deployed successfully, you should first ensure that the deployment succeeds without Docker errors before considering the deployment strategy. – Ziyang Liu-MSFT Commented Feb 26 at 9:20

1 Answer


The one pipeline partially succeeds and stops after deploying the service at the first available target machine. The second pipeline, even though it marks the same job as failed, successfully deploys and tries again for every target VM.

In your "Deploy_to_VM2" job, the condition is eq(dependencies.Deploy_to_VM1.result, 'Failed'), which means that only the first job is failed, the second job will run.
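Note that 'Failed' is only one of the possible job results. If your intent is to fall through to VM2 whenever VM1 did not fully succeed, the condition would also need to cover the "partially succeeded" outcome, which job conditions see as SucceededWithIssues. For example (a sketch):

    condition: in(dependencies.Deploy_to_VM1.result, 'Failed', 'SucceededWithIssues')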

According to the current information, there may be two situations. The first is that the errors from deploying the different services differ, so some runs are "failed" and some are "partially succeeded". The second is that you used continueOnError: true in the tasks of the first job in some pipelines, so even if there are failed tasks, the job is marked "partially succeeded". Although your service was deployed successfully, you should first ensure that the deployment succeeds without Docker errors before considering the deployment strategy.
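One more thing to check (an assumption on my side, not confirmed from your logs): both messages quoted in the question are warnings that Docker writes to stderr, and the SSH@0 task has a failOnStdErr input that defaults to true, so any stderr output can fail the task even when every command exits with code 0. If that is what is happening, a minimal sketch of the deploy task could look like this, keeping everything else as in your YAML:

      - task: SSH@0
        inputs:
          sshEndpoint: 'sshvm1'
          runOptions: 'inline'
          # do not fail the task just because docker prints warnings to stderr
          failOnStdErr: false
          inline: |
            export version=$(version)
            echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)
            # passing --detach explicitly also removes the detach notice; false waits for the services to converge
            docker stack deploy --detach=false -c /tmp/docker-compose.yml service
        condition: succeeded()
        name: deploy_vm1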
