# [Interest] Understanding QImage::transformed()

Jason H jhihn at gmx.com
Mon Dec 17 16:06:45 CET 2018

```
> Sent: Saturday, December 15, 2018 at 10:32 AM
> From: "Samuel Rødal" <srodal at gmail.com>
> To: "Jason H" <jhihn at gmx.com>
> Cc: interest at lists.qt-project.org
> Subject: Re: [Interest] Understanding QImage::transformed()
>
> Accidentally replied off-list, replying again here.
>
> On Fri, Dec 14, 2018 at 11:56 PM Jason H <jhihn at gmx.com> wrote:
> >
> > I have an image. I have identified 4 control points in an image (1920x1080).
> > I want to map the points to a square image, of 1080 on a side.
> >
> > QPolygonF fromPoly(QVector<QPointF> { ... });
> > QPolygonF toPoly(QVector<QPointF> {QPoint(squareDimension/2, 0), QPoint(squareDimension, squareDimension/2), QPoint(squareDimension/2, squareDimension), QPoint(0, squareDimension/2)});
> >
> > Where the toPoly maps to [(960, 0), (1080, 960), (960, 1080), (0, 960)]
>
> Hmm? Surely toPoly is something like:
>
> QPolygonF toPoly(QVector<QPointF> { QPointF(540, 0), QPointF(1080,
> 540), QPointF(540, 1080), QPointF(0, 540) });
>
> How can squareDimension == 1080 and squareDimension/2 == 960?

That is an error: that's the half-width, not the half-height. I sent the message and realized the mistake, but the post hadn't shown up by the time I had to leave, so I wasn't able to reply to my own thread with a correction rather than starting a new one. You are correct, 540 is the right number.

> > QTransform tx;
> > out = image.transformed(QImage::trueMatrix(tx, image.width(), image.height()));
> > qDebug() << out.save("sdsd1.jpg");
> > out = out.copy((out.width() - squareDimension)/2, (out.height() - squareDimension)/2, squareDimension, squareDimension);
> > qDebug() << out.save("sdsd2.jpg");
> > }
> >
> > But out is (3270x2179);
> > The control points match up with a square of 1600x1600 in that out image, which is not right. No matter what I do, using trueMatrix() or not, when I crop the image to 1080x1080, it is too zoomed in.
> >
> > What do I need to do to get all the control points to fit into an image of 1080x1080? Imagine an image (1920x1080) of a clock with some perspective skew. I identify the 12, 3, 6, and 9 o'clock positions. I want to create an image of the clock without perspective skew, that is to say, a face-on approximation of the clock.
>
> The result of QImage::trueMatrix() isn't meant to be passed to
> QImage::transformed(), since QImage::transformed() will _always_ do
> the compensation to ensure all the points of the original image are
> included in the transformed image. So using tx or trueMatrix(tx) will
> produce the same result. Instead, you want to use QImage::trueMatrix()
> to figure out where the (0, 0) origin in the toPoly's coordinate
> system gets mapped, in order to cut the (0, 0, 1080, 1080) rectangle
> from the transformed image:
>
>     QTransform tx;
>     QTransform trueMatrix = QImage::trueMatrix(tx, image.width(), image.height());
>
>     QPoint delta = trueMatrix.map(tx.inverted().map(QPointF(0, 0))).toPoint();
>     QImage out = image.transformed(tx, Qt::SmoothTransformation).copy(delta.x(), delta.y(), 1080, 1080);
>
>     out.save("out.jpg");
> }
>
> We map (0, 0) in the toPoly's coordinate system to the fromPoly and
> source image's coordinate system by using the inverse of tx, and then
> we map this point to the target image by using the result of
> trueMatrix().
>
> trueMatrix docs: "This function returns the modified matrix, which
> maps points correctly from the original image into the new image."
>

Thanks Samuel, I was confused by this part of the transformed() docs: "The transformation matrix is internally adjusted to compensate for unwanted translation; i.e. the image produced is the smallest image that contains all the transformed points of the original image. Use the trueMatrix() function to retrieve the actual matrix used for transforming an image." Then trueMatrix() says: "This function returns the modified matrix, which maps points correctly from the original image into the new image."
I interpreted the combination of the two as saying "use trueMatrix() to retrieve the actual matrix for transforming an image without this translation." It seems to me that if I'm doing quad-to-quad, I am intentionally specifying where I want the pixels to end up, rather than having to query where they ended up afterwards. I believe this is how OpenCV works, with:
matrix = cv2.getPerspectiveTransform(pts1, pts2)

Anyway, thanks for the insight! I'll give this a go.

```