[Interest] Depth-first filtering for QAbstractProxyModel

Thompson, Adam B. thompsonab at ornl.gov
Thu Sep 8 15:25:46 CEST 2016


André,

I've only been working with Qt for the last couple of years or so and haven't had any formal training, so anything I'm doing is based on my interpretation of their documentation or examples. The contents of the model itself aren't very complex; it just has the potential to hold a large number of nodes that the user needs to sift through to take actions (open plots, text editors, etc.). That said, I don't have enough experience to agree or disagree with your assessment.

I've tried a caching scheme, but it doesn't seem to matter much. It's certainly possible my caching isn't working properly, so I'm not ruling that out as an issue. A custom storage back-end and QAbstractItemModel subclass would be doable; it would just take some time to change things around from how they are right now. That still seems to go against the suggestions in their documentation, since the filtering would be done in the model itself instead of a proxy.

I really want to do this the right way so it scales well with the amount of data it stores. The problem is having the time/funding to implement it, since we have other, higher-priority tasks and the tree is currently functional, just not performant with larger models.

Thanks,
Adam

From: Interest [mailto:interest-bounces+thompsonab=ornl.gov at qt-project.org] On Behalf Of André Somers
Sent: Thursday, September 8, 2016 3:32 AM
To: interest at qt-project.org
Subject: Re: [Interest] Depth-first filtering for QAbstractProxyModel




On 07/09/2016 at 18:03, Thompson, Adam B. wrote:
André,

I'm just using a QStandardItemModel as the source model and a subclass of QSortFilterProxyModel as the QTreeView's model. It seemed simple enough to use QStandardItemModel instead of a custom data structure exposed via a QAbstractItemModel subclass, since I don't need anything too complex in terms of storage, etc.
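For reference, a depth-first filter on top of QSortFilterProxyModel is usually built by recursing over children in filterAcceptsRow(). A minimal sketch of that kind of setup (the class name is hypothetical and the filter logic is an assumption, not a quote from the actual code):

    #include <QSortFilterProxyModel>

    class DepthFirstFilterProxy : public QSortFilterProxyModel  // hypothetical name
    {
    protected:
        bool filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const override
        {
            const QModelIndex index = sourceModel()->index(sourceRow, 0, sourceParent);
            if (!index.isValid())
                return false;

            // Accept the node itself if its display text matches the pattern.
            if (index.data().toString().contains(filterRegExp()))
                return true;

            // Otherwise recurse into the children; for large trees this
            // repeated descent is where the time goes.
            const int childCount = sourceModel()->rowCount(index);
            for (int row = 0; row < childCount; ++row) {
                if (filterAcceptsRow(row, index))
                    return true;
            }
            return false;
        }
    };

The proxy then sits between the QStandardItemModel and the QTreeView via proxy->setSourceModel(source) and view->setModel(proxy).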
Seeing what you write afterwards, I think I disagree with that assessment. But that could also be my bias against the QSIM class and the Q*Widget view classes. I think these are fine for toy applications or very small models, but not for trees with thousands of nodes that need fast depth-first filtering.


My understanding is I should be using some subclass of QAbstractProxyModel to modify the presentation of the underlying (source) model instead of having special logic in the model itself. At least, that's based on my interpretation of the Qt documentation.
Well, that's one way. But I think I would consider ditching QSIM and creating your own data store with a QAIM-derived model on top. You can design that store to the requirements you actually have, such as quick depth-first filtering. That is certainly going to be much faster than relying on a generic solution.
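If it helps, the skeleton of a QAIM-derived model over a custom store is not much code. A minimal, read-only sketch (TreeNode and CustomTreeModel are made-up names; a real store would also carry whatever per-node data and search index you actually need):

    #include <QAbstractItemModel>
    #include <QString>
    #include <QVariant>
    #include <memory>
    #include <vector>

    // Hypothetical node type for a custom tree store.
    struct TreeNode {
        QString text;
        TreeNode *parent = nullptr;
        std::vector<std::unique_ptr<TreeNode>> children;

        int row() const {
            if (!parent)
                return 0;
            for (size_t i = 0; i < parent->children.size(); ++i)
                if (parent->children[i].get() == this)
                    return int(i);
            return 0;
        }
    };

    class CustomTreeModel : public QAbstractItemModel {
    public:
        explicit CustomTreeModel(std::unique_ptr<TreeNode> root, QObject *parent = nullptr)
            : QAbstractItemModel(parent), m_root(std::move(root)) {}

        QModelIndex index(int row, int column, const QModelIndex &parent = QModelIndex()) const override
        {
            TreeNode *p = nodeFor(parent);
            if (!p || column != 0 || row < 0 || row >= int(p->children.size()))
                return QModelIndex();
            return createIndex(row, column, p->children[size_t(row)].get());
        }

        QModelIndex parent(const QModelIndex &child) const override
        {
            TreeNode *node = nodeFor(child);
            if (!node || !node->parent || node->parent == m_root.get())
                return QModelIndex();
            return createIndex(node->parent->row(), 0, node->parent);
        }

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            TreeNode *p = nodeFor(parent);
            return p ? int(p->children.size()) : 0;
        }

        int columnCount(const QModelIndex & = QModelIndex()) const override { return 1; }

        QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
        {
            if (!index.isValid() || role != Qt::DisplayRole)
                return QVariant();
            return nodeFor(index)->text;
        }

    private:
        // Invalid indexes refer to the invisible root node.
        TreeNode *nodeFor(const QModelIndex &index) const
        {
            return index.isValid() ? static_cast<TreeNode *>(index.internalPointer())
                                   : m_root.get();
        }

        std::unique_ptr<TreeNode> m_root;
    };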

If you're not prepared to go down that route, I think I'd let the proxy or the source model build up some kind of index to speed up filtering. That is easier to maintain if the model is fairly static rather than changing all the time, but you don't say how often yours changes. If the data in the tree never or only seldom changes, you could do something like this:

Build up a single vector of the items in your model containing the piece of data you need to search on, in depth-first tree order. Then let each node in your tree keep the indices of the first and last items of its subtree: the index of the node's own text and the index of its last descendant. You will see that every node's range is a sub-range of its parent's range.

When you search, you do a linear scan over the vector to find all matching items, ending up with a set of indices into that vector. The visible nodes in your tree are then exactly those whose stored range contains at least one matching index. That is a cheap, non-recursive test, especially since you don't need to check the whole list of matches for every node, and the gathering of matching indices can be parallelized.

The downside is that an insert becomes very expensive, because you'd basically need to adjust every node that follows in depth-first order. In your case, you could store the indices in the model itself.
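A rough sketch of this scheme, assuming a plain in-memory tree of nodes with a searchable text field (all names here are illustrative, not from the thread):

    #include <QString>
    #include <algorithm>
    #include <vector>

    struct Node {
        QString text;
        std::vector<Node *> children;
        // Range of this node's subtree in the depth-first vector:
        // firstIndex is the node's own position, lastIndex that of its
        // last descendant. Every node's range is a sub-range of its parent's.
        int firstIndex = -1;
        int lastIndex = -1;
    };

    // Build the depth-first vector once; cheap as long as the tree is static.
    void buildIndex(Node *node, std::vector<QString> &flat)
    {
        node->firstIndex = int(flat.size());
        flat.push_back(node->text);
        for (Node *child : node->children)
            buildIndex(child, flat);
        node->lastIndex = int(flat.size()) - 1;
    }

    // Linear scan for matches; this is the part that can be parallelized.
    std::vector<int> matchingIndices(const std::vector<QString> &flat, const QString &needle)
    {
        std::vector<int> matches;
        for (int i = 0; i < int(flat.size()); ++i) {
            if (flat[i].contains(needle, Qt::CaseInsensitive))
                matches.push_back(i);
        }
        return matches;  // already sorted, because the scan is in order
    }

    // A node is visible if any matching index falls inside its subtree range.
    // With a sorted match list this is a single binary search, no recursion.
    bool isVisible(const Node *node, const std::vector<int> &sortedMatches)
    {
        auto it = std::lower_bound(sortedMatches.begin(), sortedMatches.end(),
                                   node->firstIndex);
        return it != sortedMatches.end() && *it <= node->lastIndex;
    }

The filtering proxy could then call something like isVisible() from filterAcceptsRow() instead of recursing over the children on every call.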


André