[Interest] Best practices for Qt code base benchmarks?
Ivan Solovev
ivan.solovev at qt.io
Tue Oct 1 09:47:34 CEST 2024
Hi Stan,
> Would metrics generated by the benchmark be documented in the code review, as comments within the tst_*.cpp file itself, or in a README.txt file within the benchmark folder?
I normally include the benchmark results in the commit message.
That makes it easier for the reviewers and leaves some info in the git log.
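For example, a summary along these lines (the function name and the numbers below are entirely invented, purely to show the shape):

    Optimize QFoo::bar()

    Results of tst_bench_foo on my Linux desktop:
        before: 0.052 msecs per iteration
        after:  0.031 msecs per iteration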
Also, if you're rewriting some pre-existing function, it might make sense to
add the benchmark as a prequel commit, before actually doing the refactoring.
That way the reviewers can run the same benchmark against both the old and the
new implementation.
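In case it helps, a benchmark under tests/benchmarks is typically just a
regular Qt Test case that wraps the code under test in the QBENCHMARK macro.
A minimal sketch (the class name and the benchmarked operation are only
illustrative):

    #include <QtTest/QtTest>

    class tst_bench_StringToUpper : public QObject
    {
        Q_OBJECT
    private slots:
        void toUpper();
    };

    void tst_bench_StringToUpper::toUpper()
    {
        const QString input(1000, QLatin1Char('a'));
        QBENCHMARK {
            // QBENCHMARK repeats this block until the measurement is
            // statistically stable and reports a per-iteration result.
            const QString upper = input.toUpper();
            Q_UNUSED(upper);
        }
    }

    QTEST_MAIN(tst_bench_StringToUpper)
    // Assumes this file is named tst_bench_stringtoupper.cpp (AUTOMOC).
    #include "tst_bench_stringtoupper.moc"

You can then run the resulting executable directly (or from Qt Creator) and
paste the reported numbers into the commit message.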
Best regards,
Ivan
________________________________________
From: Interest <interest-bounces at qt-project.org> on behalf of Stan Morris <pixelgrease at gmail.com>
Sent: Monday, September 30, 2024 10:36 PM
To: interest at qt-project.org
Subject: [Interest] Best practices for Qt code base benchmarks?
I want to add a benchmark that compares a legacy Qt function's performance against one I am developing as a patch. I cannot find guidance on best practices for benchmarks.
Is there documentation regarding best practices for benchmarks within the Qt framework?
Are "<module>/tests/benchmarks/..." intended to be run from with Qt Creator?
Is there a convention for documenting the goals of benchmarks?
It appears to me that the "tests/benchmarks" folder is meant for Qt code base developers to investigate performance during development, but where is the explanation of how to interpret the results?
For example, consider: "/qtdeclarative/tests/benchmarks/quick/events/". Are the results for a specific platform recorded anywhere? What do the results *mean*?
I'm testing on two platforms, a desktop and an embedded device, and getting results that show the patch can improve performance. Would metrics generated by the benchmark be documented in the code review, as comments within the tst_*.cpp file itself, or in a README.txt file within the benchmark folder?