How can I optimize the space used by multiple similar Docker containers with different versions of the CUDA toolkit?
I'm currently using Docker to build and run multiple different versions of an application (GROMACS, if that's important). This application requires CUDA and a large number of libraries, resulting in each individual image nearing 30GB (according to Docker). It seems to me that this is potentially far too large.
It is my understanding that because Docker uses overlayfs, images that share a base image only take up as much extra space as is required by the layers in which they differ. I have tried to use a shared base image to take advantage of this, but I'm not certain I'm doing it right. Here's the core problem:
Different versions of the application require different versions of the CUDA toolkit, each of which takes up additional space. I'm not sure whether it's better to install every CUDA toolkit in the base image and leave them all there, or to delete the ones that aren't needed in each derived image.
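For reference, here's a simplified sketch of the layout I'm currently using (image names, CUDA versions, and package names are illustrative rather than my exact setup):

```dockerfile
# Dockerfile.base -- shared base with the common libraries and every CUDA toolkit
FROM ubuntu:22.04

# Build dependencies shared by every GROMACS build
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Both CUDA toolkits installed side by side in the base image
# (assumes NVIDIA's CUDA apt repository was configured in an earlier step)
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-toolkit-11-8 cuda-toolkit-12-2 \
    && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Dockerfile.gromacs-2023 -- one application version built on top of the shared base
FROM gromacs-base:latest

# This version only needs CUDA 12.2
ENV CUDA_HOME=/usr/local/cuda-12.2
ENV PATH=${CUDA_HOME}/bin:${PATH}

# ... build and install this GROMACS version here ...
```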
How can I optimize the space that these containers take up?
Of course, I'm aware that this may be an X-Y problem, so I'm also open to critique of my approach.
I have tried installing every CUDA version in the base image and deleting the unneeded versions in each of the final images. I was expecting this to decrease the size of the images, but that isn't what I saw. After some research, this seems unsurprising: regardless of whether a deleted CUDA toolkit is visible in the upper layers, its files are still present in the lower layers of the image (is that correct?).
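Concretely, the attempt amounted to adding a layer along these lines to each per-version Dockerfile (again simplified):

```dockerfile
# e.g. Dockerfile.gromacs-2021, which only needs CUDA 11.8
FROM gromacs-base:latest

# This only adds a "whiteout" entry in the new layer; the toolkit's files are
# still stored in the base layer underneath, so the total image size reported
# by `docker images` does not go down.
RUN rm -rf /usr/local/cuda-12.2
```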
Consistent with this, I have noticed that the space actually taken up by files inside a running container is about half of what Docker reports for the image, perhaps because of this difference between what is stored in the lower layers and what is visible in the final merged filesystem.
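For what it's worth, the comparison I made was roughly along these lines (image name is illustrative):

```bash
# Size Docker reports for the image (the sum of all of its layers)
docker image ls gromacs-2023

# Space actually visible in the merged filesystem inside a container
# (-x keeps du on the overlay filesystem, skipping /proc, /sys, etc.)
docker run --rm gromacs-2023 du -shx /
```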
Thanks!