How to deploy to the first available VM in a cluster of VMs via Azure Pipelines and SSH tasks
We need to deploy to a cluster of VMs that form a Docker Swarm on prem. We use Azure Pipelines as our CI/CD tool. Until we deploy to main, everything works like a charm, since we use one VM for every environment except main, which uses multiple VMs. The flow is: copy the necessary files over SSH to the VM, then run an SSH task against the destination VM executing the deploy commands needed. So the part of the pipeline that deploys for main looks like this:
- job: Deploy_to_VM1
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
  dependsOn:
    - Download_Artifact_and_Version
  variables:
    version: $[ dependencies.Download_Artifact_and_Version.outputs['setVersionVar.var'] ]
  displayName: 'Deploy to VM1'
  steps:
    - task: CopyFilesOverSSH@0
      displayName: 'Upload service files to vm1'
      inputs:
        sshEndpoint: 'sshvm1'
        sourceFolder: '$(System.DefaultWorkingDirectory)/docker-compose'
        contents: |
          config.yml
          docker-compose.yml
        targetFolder: '/tmp'
        cleanTargetFolder: false
        overwrite: true
        readyTimeout: '20000'
    - task: SSH@0
      inputs:
        sshEndpoint: 'sshvm1'
        runOptions: 'inline'
        inline: |
          export version=$(version)
          echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)
          docker stack deploy -c /tmp/docker-compose.yml service
      condition: succeeded()
      name: deploy_vm1
- job: Deploy_to_VM2
  dependsOn:
    - Download_Artifact_and_Version
    - Deploy_to_VM1
  condition: eq(dependencies.Deploy_to_VM1.result, 'Failed')
  variables:
    version: $[ dependencies.Download_Artifact_and_Version.outputs['setVersionVar.var'] ]
  displayName: 'Deploy to VM2'
  steps:
    - task: CopyFilesOverSSH@0
      displayName: 'Upload service files to vm2'
      inputs:
        sshEndpoint: 'sshvm2'
        sourceFolder: '$(System.DefaultWorkingDirectory)/docker-compose'
        contents: |
          config.yml
          docker-compose.yml
        targetFolder: '/tmp'
        cleanTargetFolder: false
        overwrite: true
        readyTimeout: '20000'
    - task: SSH@0
      inputs:
        sshEndpoint: 'sshvm2'
        runOptions: 'inline'
        inline: |
          export version=$(version)
          echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)
          docker stack deploy -c /tmp/docker-compose.yml service
      condition: succeeded()
      name: deploy_vm2
This piece of pipeline always produces 2 errors in its output:

##[error]WARNING! Your password will be stored unencrypted in /***/.docker/config.json.
##[error]Since --detach=false was not specified, tasks will be created in the background. In a future release, --detach=false will become the default.

But the pipelines run successfully. The issue is that we changed VMs now. For one of our services, this piece of pipeline worked as expected and deployed on the first available VM, with the run status "partially succeeded". For another service, the 1st job deploys successfully but is flagged as failed, so the pipeline continues to the 2nd VM, deploys successfully again, marks that job as failed too, and the same happens for every job in the list of VMs.
Does anybody have any insight into what is happening, or a better deployment strategy for a cluster of on-prem VMs using SSH connections?

I was also looking at the job matrix strategy, which takes a list of inputs and could supply the SSH service connection dynamically, but the first issue is that I don't know how to stop once the first deployment succeeds, and the second issue is that I cannot pass the SSH service endpoint dynamically. So I do not know if it fits my need.
asked Feb 25 at 10:53 by Spyros Tserentzoulias

1 Answer
One pipeline partially succeeds and stops after deploying the service to the first available target machine. The other pipeline, even though it marks the same job as failed, successfully deploys and tries again on every target VM.
In your "Deploy_to_VM2" job, the condition is eq(dependencies.Deploy_to_VM1.result, 'Failed'), which means that only the first job is failed, the second job will run.
Based on the current information, there may be two situations. The first is that the errors raised while deploying the different services differ, so some jobs end up "failed" and others "partially succeeded". The second is that you used continueOnError: true in the tasks of the first job in some pipelines, so even if there are failed tasks the job is marked "partially succeeded" rather than "failed". Although your service was deployed successfully, you should first make sure the deployment completes without Docker errors before reconsidering the deployment strategy.
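For the two ##[error] lines specifically: both come from Docker writing to stderr. docker login prints the unencrypted-password warning there, and docker stack deploy prints the --detach notice there. The SSH@0 task has a failOnStdErr input that defaults to true, so any stderr output fails the step even when every command exits with code 0, which would explain a job that deploys successfully yet is marked failed. A minimal sketch of the SSH step with that behavior turned off (everything else mirrors the step from the question):

- task: SSH@0
  inputs:
    sshEndpoint: 'sshvm1'
    runOptions: 'inline'
    # Do not fail the step just because the remote commands wrote to stderr;
    # a non-zero exit code from the script still fails it.
    failOnStdErr: false
    inline: |
      export version=$(version)
      echo "$(REGISTRY_PASSWORD)" | docker login -u $(REGISTRY_USER) --password-stdin $(REGISTRY_URL)
      # Passing --detach explicitly silences the detach notice; with
      # --detach=false the command also waits for the stack to converge.
      docker stack deploy --detach=false -c /tmp/docker-compose.yml service
  condition: succeeded()
  name: deploy_vm1

Note that this is different from continueOnError: true, which lets the pipeline proceed past a genuinely failed task and marks the job partially succeeded; failOnStdErr: false simply stops stderr output from being treated as a failure in the first place.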