I'm running a TensorFlow UNet model on geospatial imagery. I split each image into overlapping 500×500-pixel tiles with a custom function (get_tiles), using a stride of 460 (i.e. 500 − 2×padding, with padding = 20) so that adjacent tiles overlap. I then run predictions on these tiles and, to standardize their size, pad/crop each one with: prediction_padded = tf.image.resize_with_crop_or_pad(prediction, resolution, resolution)
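Since the misalignment might come from this padding step, here is a minimal check of what tf.image.resize_with_crop_or_pad does to an undersized edge tile (the 460-row height is just an example; my real edge tiles vary). The key behavior: it pads/crops symmetrically around the center, so the pixel data shifts relative to a transform that assumes top-left anchoring.

```python
import tensorflow as tf

# A batch containing one hypothetical 460x500 edge tile of ones.
edge_tile = tf.ones((1, 460, 500, 1))

# resize_with_crop_or_pad splits the 40 missing rows evenly: 20 rows of
# zero padding on top and 20 on the bottom. The tile's content therefore
# moves down 20 rows relative to its original top-left anchor.
padded = tf.image.resize_with_crop_or_pad(edge_tile, 500, 500)

print(padded.shape)                        # (1, 500, 500, 1)
print(float(padded[0, :20].numpy().sum())) # 0.0 -- zero padding at the top
print(float(padded[0, 20, 0, 0].numpy()))  # 1.0 -- data now starts at row 20
```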

Then I merge the predicted tiles with rasterio.merge.merge, using each tile's original geospatial transform. But the final mosaic comes out as a checkerboard: the tiles look misaligned, as if they are not being georeferenced correctly. (Something could be wrong with my input images, but that hasn't been the case so far, and the individual prediction tiles do appear to pick out real features.)

I've already verified both of the following:

  • The tile extraction loop prints window offsets and sizes (most are 500×500, with edge tiles being smaller, which is expected).
  • The prediction tiles (after resizing) all have shape (500, 500).

What could be causing this checkerboard effect? Is the forced resizing misaligning the data relative to each tile's geospatial transform? Or should I instead be changing the window stride, merging only the valid central region of each tile, or avoiding padding on edge tiles?

Tags: geospatial · Checkerboard Mosaic When Merging Prediction Tiles from TensorFlow UNet · Stack Overflow