
I'm trying to remove OpenCV from my image pipeline and replace it with PIL. I understand that OpenCV's affine transformation maps source -> destination, while the parameter for PIL's `.transform()` is the destination -> source mapping, so the OpenCV transformation matrix should just need to be inverted before it can be used with PIL.

The problem is that there's one image I can successfully affine transform in OpenCV, but not in PIL: the PIL result comes out upside down and translated. I've tried inverting the OpenCV transformation matrix and passing it to PIL's `.transform()` method, but I get the same result. What am I doing wrong here?

...
# OpenCV: forward (source -> destination) warp with the 2x3 matrix M
warped = cv2.warpAffine(image, M, (width, height))
...
# Unpack the six coefficients of M
a_f = M[0, 0]
b_f = M[0, 1]
c_f = M[0, 2]
d_f = M[1, 0]
e_f = M[1, 1]
f_f = M[1, 2]

a_i, b_i, c_i, d_i, e_i, f_i = invert_affine(a_f, b_f, c_f, d_f, e_f, f_f)
warped = image.transform(
    ...
    (a_i, b_i, c_i, d_i, e_i, f_i),
    ...
)
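The `invert_affine` helper isn't shown in the question. A minimal sketch of what such a helper could look like (my own, assuming M is the usual 2x3 OpenCV affine matrix) extends it to a 3x3 homogeneous matrix, inverts it with NumPy, and returns the six coefficients in the (a, b, c, d, e, f) order that `Image.transform` expects:

```python
import numpy as np

def invert_affine(a, b, c, d, e, f):
    # Extend the 2x3 affine matrix to 3x3 homogeneous form, invert it,
    # and return the top two rows as the six PIL coefficients.
    M = np.array([[a, b, c],
                  [d, e, f],
                  [0.0, 0.0, 1.0]])
    inv = np.linalg.inv(M)
    return tuple(inv[:2].ravel())

# Pure translation by (5, 3): the inverse translates by (-5, -3)
print(invert_affine(1, 0, 5, 0, 1, 3))
# -> (1.0, 0.0, -5.0, 0.0, 1.0, -3.0)
```

With OpenCV still available, `cv2.invertAffineTransform(M)` does the same inversion on the 2x3 matrix directly.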

I was asked for a reproducible example. I've put the image on a CDN. I have a script on GitHub.


Asked Feb 2 at 20:47 by Tyler Norlund; edited Feb 4 at 16:29 by Christoph Rackwitz.
  • Post the original input image, the OpenCV transformation matrix, and your processed transformation matrix. Aren't you supposed to give OpenCV's invertAffineTransform() a matrix as an array rather than individual values? I have never used it, but the docs seem to indicate it takes a matrix, like M in warpAffine. What is invert_affine? Is that PIL? Why not use OpenCV's invertAffineTransform(), then get the individual matrix elements from the result and use those in PIL? – fmw42, Feb 2 at 23:33
  • Or see geeksfeeks./how-to-inverse-a-matrix-using-numpy. You should put [] around each group of 3 elements to indicate the rows of the matrix. – fmw42, Feb 2 at 23:44
  • A minimal reproducible example is required. Never assume that the four corners of a rectangle around something are in the order or orientation that you hope; the result being upside down can very well come from such an assumption. – Christoph Rackwitz, Feb 3 at 0:04
  • Minimal reproducible example – Tyler Norlund, Feb 3 at 3:03

1 Answer


This was a problem with the EXIF orientation tag.

Fixed it with this:

from PIL import Image, ImageOps

# Open the image with PIL and apply any EXIF orientation
# before computing or applying the affine transform
image = Image.open(local_file)
image = ImageOps.exif_transpose(image)
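For anyone hitting the same symptom: the Orientation tag (EXIF tag 274) tells viewers how to rotate or flip the raw pixel data on display. `cv2.imread` ignores it, while most viewers honor it, so a transform computed against the displayed image won't match the raw pixels PIL hands you. A quick sketch of how one might check for the tag with Pillow's `getexif` (using a freshly created image as a stand-in for `Image.open(local_file)`):

```python
from PIL import Image

# EXIF tag 274 is "Orientation": a value of 3, for example, means the
# stored pixels are rotated 180 degrees relative to how the image
# should be displayed.
image = Image.new("RGB", (8, 8))  # stand-in for Image.open(local_file)
orientation = image.getexif().get(274)
print(orientation)  # None here: a freshly created image carries no EXIF data
```

If this prints a value other than None or 1 for your input file, `ImageOps.exif_transpose` (as in the fix above) bakes that orientation into the pixels so both libraries see the same image.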

