[Interest] Understanding QImage::transformed()

Samuel Rødal srodal at gmail.com
Mon Dec 17 19:19:52 CET 2018


On Mon, Dec 17, 2018 at 7:00 PM Jason H <jhihn at gmx.com> wrote:
>
> > Sent: Monday, December 17, 2018 at 10:06 AM
> > From: "Jason H" <jhihn at gmx.com>
> > To: "Samuel Rødal" <srodal at gmail.com>
> > Cc: interest at lists.qt-project.org
> > Subject: Re: [Interest] Understanding QImage::transformed()
> ...
> >
> > Thanks Samuel, I was confused by this part for transformed(): "The transformation matrix is internally adjusted to compensate for unwanted translation; i.e. the image produced is the smallest image that contains all the transformed points of the original image. Use the trueMatrix() function to retrieve the actual matrix used for transforming an image." Then trueMatrix() says "This function returns the modified matrix, which maps points correctly from the original image into the new image."
> > In which I interpreted the combination of the two as saying "Use trueMatrix() to retrieve the actual matrix for transforming an image without this translation". It seems to me that if I'm doing quad-to-quad, I am intentionally specifying where I want the pixels to end up, rather than having to query where they ended up afterwards. I believe this is how OpenCV works, with:
> > matrix = cv2.getPerspectiveTransform(pts1, pts2)
> >
> > Anyway, thanks for the insight! I'll give this a go.
>
> So I've played with this a bit, and still no joy. Using the code you provided and my own attempts, it never maps those points correctly. Your code looks like you understand what I am trying to do, so I communicated that part effectively at least.
>
> The points I have are labeled as colors; the quadToQuad call uses these points for the toPoly mapping:
> QMap<Qt::GlobalColor, QPoint> toPoints {
>   {Qt::yellow, QPoint (540, 0)},
>   {Qt::blue, QPoint (1080, 540)},
>   {Qt::red, QPoint (540, 1080)},
>   {Qt::green, QPoint (0, 540)}
> };
>
> Then after image.transformed() it is correctly oriented, but nothing else is correct. The actual points come out to be:
>
> QMap<Qt::GlobalColor, QPoint> resultPoints { // These are estimates via looking at an image in GIMP
>   {Qt::yellow, QPoint (1620, 300)},
>   {Qt::blue, QPoint (2448, 1110)},
>   {Qt::red, QPoint (1638, 1917)},
>   {Qt::green, QPoint (825, 1104)}
> };
>
> In theory my output rect is then (825,300)-(2448,1927), however dx, dy = (1623, 1627) where I expected (1080, 1080). It's 1.50 times bigger in both dimensions than it should be. However, even if I have some hypotenuse mistake, that would top out at 1.42.
>
> However I can't even get anything remotely close to those points, so I can't do the math to identify the destination rect to extract and scale down: (green.x(), yellow.y())-(blue.x(), red.y())
>
> I kind of understand what you tried to do:
> QTransform trueMatrix = QImage::trueMatrix(tx, image.width(), image.height()); // not sure why the true matrix and image dimensions are needed. From a linear algebra perspective, a transform is a transform, but whatever...

The image dimensions are needed to figure out where the original image
ends up after being transformed, in order to know how much
QImage::transformed() needed to compensate by translating so that the
entire source image still fits in the resulting image. The result of
trueMatrix() is a matrix that also takes this compensating offset into
account, so you can use it to map points from the original coordinate
system into the image that QImage::transformed() returns.
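
To illustrate, here's a rough sketch along the lines of my earlier
mail ('image', 'tx', 'fromPoly' and 'toPoly' refer to your source
image, the quadToQuad transform and the two quads):

    QTransform tx;
    QTransform::quadToQuad(fromPoly, toPoly, tx); // your perspective transform

    // transformed() produces an image big enough to hold the whole mapped source
    QImage out = image.transformed(tx, Qt::SmoothTransformation);

    // trueMatrix() includes the extra translation that transformed() applied,
    // so it maps a point of the original image to its pixel in 'out'
    QTransform trueMatrix = QImage::trueMatrix(tx, image.width(), image.height());
    QPoint mappedRed = trueMatrix.map(QPoint(715, 13)); // e.g. your red dot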

> QPoint delta = trueMatrix.map(tx.inverted().map(QPointF(0,0))).toPoint(); // mapping (0,0) through the inverse of the transform and back through, but this won't produce the same coordinate mapping as on the image, so why?

We want to cut out the (0, 0, 1080, 1080) rectangle in the coordinate
space of toPoly, so we first need to map the (0, 0) coordinate back
into fromPoly's coordinate space (using tx.inverted()) and then into
the transformed image's coordinate space (using trueMatrix), so that
the extra translation done by QImage::transformed() is taken into
account.
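
In code, that cut-out step would look roughly like this (a sketch;
the 1080x1080 size comes from your toPoints):

    // where toPoly's origin (0, 0) ends up in the image returned by transformed()
    QPoint delta = trueMatrix.map(tx.inverted().map(QPointF(0, 0))).toPoint();

    // copy the 1080x1080 target rectangle out of the transformed image
    QImage result = out.copy(QRect(delta, QSize(1080, 1080)));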

> I think the code should look like this (but this doesn't work either):
> QImage out = image.transformed(tx, Qt::SmoothTransformation);
>
> QMap<Qt::GlobalColor, QPoint> destinationPoints {  // should map to resultPoints
>   {Qt::yellow, tx.map(toPoints[Qt::yellow])}, // QPoint(640,252), wrong
>   {Qt::blue, tx.map(toPoints[Qt::blue])}, // QPoint(769,540), wrong
>   {Qt::red, tx.map(toPoints[Qt::red])},   //  QPoint(478,674), wrong
>   {Qt::green, tx.map(toPoints[Qt::green])} // QPoint(349,384), wrong
> };
> Or, using the mapping you suggested:
> QMap<Qt::GlobalColor, QPoint> destinationPoints { // should map to resultPoints
>   {Qt::yellow, trueMatrix.map(tx.inverted().map(colorPoints[Qt::yellow]))}, // QPoint(2234,1452), wrong
>   {Qt::blue, trueMatrix.map(tx.inverted().map(colorPoints[Qt::blue]))}, // QPoint(1954,1050), wrong
>   {Qt::red, trueMatrix.map(tx.inverted().map(colorPoints[Qt::red]))},   //  QPoint(2362,764), wrong
>   {Qt::green, trueMatrix.map(tx.inverted().map(colorPoints[Qt::green]))} // QPoint(2639,1172), wrong
> };
>
> I tried other variations, but I never got anything remotely correct.

Here, you should use trueMatrix directly, as it's what maps points
from the original coordinate system to the new image. For example
"trueMatrix.map(colorPoints[Qt::yellow])".

> I'm attaching sample images.
> The dots are at these locations in the fromImage.jpg:
> "red": [714.667, 13.3333],
> "green": [992, 421.333],
> "blue": [306.667, 298.667],
> "yellow": [586.667, 701.333]
>
> It produces a transform of: QTransform(type=TxProject, 11=-1.53896 12=0.27276 13=-2.5836e-06 21=-0.299256 22=-1.5449 23=-2.44385e-05 31=1652.74 32=923.468 33=1.01866)
>
> They should wind up at the points in toPoints, and resemble toImage.jpg (created manually in GIMP).
>
> This "just works" in openCv, but I cannot get it to work in Qt.

It seems the coordinates you provided for fromImage.jpg are wrong. If
I fix the coordinates, I get the correct result using the code I
showed in the earlier mail.

Btw, instead of using QImage::transformed() here, which is a bit
wasteful since it generates a bigger image containing the entire
source image mapped into the new coordinate system, I think it's
simpler and better to use QPainter directly and set the painter
transform to the result of quadToQuad(). That way you don't even need
trueMatrix(). I've shown both approaches in the attached example,
along with some code that maps the source points correctly into the
mapped images.
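
The QPainter variant in the attachment is roughly along these lines
(a sketch, not the exact attached code; the 1080x1080 size again
comes from your toPoints):

    // paint the source image through the quad-to-quad transform directly
    // into a 1080x1080 destination; no trueMatrix() or cropping needed
    QImage result(1080, 1080, QImage::Format_ARGB32_Premultiplied);
    result.fill(Qt::black);

    QPainter p(&result);
    p.setRenderHint(QPainter::SmoothPixmapTransform);
    p.setTransform(tx); // tx from QTransform::quadToQuad(fromPoly, toPoly, tx)
    p.drawImage(QPointF(0, 0), image);
    p.end();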

--
Samuel
-------------- next part --------------
A non-text attachment was scrubbed...
Name: main.cpp
Type: text/x-c++src
Size: 1624 bytes
Desc: not available
URL: <http://lists.qt-project.org/pipermail/interest/attachments/20181217/116ec808/attachment.cpp>

