I'm running a TensorFlow UNet model on geospatial imagery. I split the image into overlapping 500×500 pixel tiles with a custom function (`get_tiles`), using a stride of 460 (i.e. 500 − 2 × padding, so 20 px of overlap on each side). I then run predictions on these tiles and, to standardize their size, pad/crop each one with: `prediction_padded = tf.image.resize_with_crop_or_pad(prediction, resolution, resolution)`
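For context, the tiling-and-prediction loop looks roughly like this (a simplified sketch of my setup; `get_tiles` is folded into the loop, and `model`, `unet.h5` and `input.tif` are placeholders):

```python
import rasterio
from rasterio.windows import Window, transform as window_transform
import tensorflow as tf

TILE = 500        # tile size in pixels
STRIDE = 460      # 500 - 2 * 20 px of overlap/padding per side
resolution = TILE

model = tf.keras.models.load_model("unet.h5")   # placeholder for my UNet

with rasterio.open("input.tif") as src:
    src_crs = src.crs
    tiles = []
    for row_off in range(0, src.height, STRIDE):
        for col_off in range(0, src.width, STRIDE):
            # Edge windows are clipped to the raster extent, so they can be < 500x500
            win = Window(col_off, row_off,
                         min(TILE, src.width - col_off),
                         min(TILE, src.height - row_off))
            img = src.read(window=win)                         # (bands, h, w)
            tile_transform = window_transform(win, src.transform)

            x = tf.convert_to_tensor(img.transpose(1, 2, 0)[None], tf.float32)
            prediction = model(x)[0]                           # (h, w, classes)
            # NOTE: resize_with_crop_or_pad pads/crops symmetrically around the centre
            prediction_padded = tf.image.resize_with_crop_or_pad(
                prediction, resolution, resolution)
            tiles.append((prediction_padded.numpy(), tile_transform, win))
```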
Then I merge the predicted tiles with `rasterio.merge.merge`, using the original geospatial transform of each tile. However, the final mosaic comes out as a checkerboard: the tiles are misaligned, and I don't think they are being georeferenced correctly. (It's possible something is wrong with my input images, but that hasn't been the case so far, and the individual prediction tiles do appear to pick out real features.)
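The merge step is roughly this (again simplified; each tile is written to its own GeoTIFF first, and the `tile_*.tif` / `mosaic.tif` names are just for illustration):

```python
import rasterio
from rasterio.merge import merge

# Write each prediction tile as a single-band GeoTIFF carrying its own transform
tile_paths = []
for i, (pred, tile_transform, win) in enumerate(tiles):
    band = pred[..., 0]                      # e.g. one class-probability channel
    path = f"tile_{i}.tif"
    with rasterio.open(
        path, "w", driver="GTiff",
        height=band.shape[0], width=band.shape[1],
        count=1, dtype=str(band.dtype),
        crs=src_crs, transform=tile_transform,
    ) as dst:
        dst.write(band, 1)
    tile_paths.append(path)

# merge() accepts open datasets or (in recent rasterio versions) file paths
mosaic, mosaic_transform = merge(tile_paths)
with rasterio.open(
    "mosaic.tif", "w", driver="GTiff",
    height=mosaic.shape[1], width=mosaic.shape[2],
    count=mosaic.shape[0], dtype=str(mosaic.dtype),
    crs=src_crs, transform=mosaic_transform,
) as dst:
    dst.write(mosaic)
```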
I've checked both of these:
- The tile extraction loop prints window offsets and sizes (most are 500×500, with edge tiles being smaller, which is expected).
- The prediction tiles (after resizing) all have shape (500, 500).
What could be causing this checkerboard effect? Is the forced resizing misaligning the data relative to each tile's geospatial transform? Or should I instead be adjusting the window stride, keeping only the valid region of each tile, or avoiding padding on edge tiles?
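To make that last option concrete, this is roughly what I mean by keeping only the valid region: crop the overlap border off each prediction and shift its transform by the same number of pixels (a hypothetical helper, assuming 20 px of overlap per side):

```python
from affine import Affine

PAD = 20  # (500 - 460) / 2, the intended overlap on each side

def valid_region(pred, tile_transform, pad=PAD):
    """Drop the overlapping border of a prediction tile and shift its
    transform so the remaining pixels stay georeferenced."""
    core = pred[pad:-pad, pad:-pad]
    # Moving the pixel origin by `pad` in both axes keeps the core aligned
    core_transform = tile_transform * Affine.translation(pad, pad)
    return core, core_transform
```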