[Interest] Issue with QSqlModel and QSortFilterModel

Konstantin Shegunov kshegunov at gmail.com
Thu Apr 18 11:10:08 CEST 2019


On Thu, Apr 18, 2019 at 12:41 AM Scott Bloom <scott at towel42.com> wrote:

> Any thoughts?
>

Hi!
As a matter of fact I was facing a similar issue in my current project. In my
case I also need to fetch the data over the network from a DB, but one I don't
have direct access to, so you can imagine I struggled, as you do, with how to
orchestrate the whole thing. Unfortunately, what I concluded is that the model
framework is designed in such a way that high-latency operations are not
possible out of the box. To give an example, take
QAbstractItemModel::canFetchMore and QAbstractItemModel::fetchMore. They are
well intentioned, in that they allow the underlying data layer to fetch data
incrementally, but the problem is the assumption they make: fetchMore is
expected to have the data available when it returns (i.e. to populate the data
synchronously); otherwise you're stuck with the default implementation, which
does nothing. There's no fetchMore with a fetchedMore counterpart, and so on.
Note also that the model itself is often not well suited to hold the actual
data, except in the simplest of cases.
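
To make that concrete, here is a minimal sketch of the usual workaround;
NetworkSource, its request() and its rowsReady() signal are made-up stand-ins
for whatever asynchronous layer sits underneath, so take it as an illustration
rather than working code:

#include <QAbstractListModel>

// Minimal sketch, assuming a hypothetical NetworkSource with an asynchronous
// request(offset, count) and a rowsReady(QStringList) signal. fetchMore() only
// *starts* the request and returns empty-handed; the rows are inserted later,
// when the reply arrives, which is exactly what the framework doesn't expect.
class AsyncListModel : public QAbstractListModel
{
    Q_OBJECT
public:
    explicit AsyncListModel(NetworkSource *source, QObject *parent = nullptr)
        : QAbstractListModel(parent), m_source(source)
    {
        connect(m_source, &NetworkSource::rowsReady,
                this, &AsyncListModel::onRowsReady);
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_rows.size();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_rows.at(index.row());
    }

    bool canFetchMore(const QModelIndex &parent) const override
    {
        return !parent.isValid() && !m_complete && !m_pending;
    }

    void fetchMore(const QModelIndex &parent) override
    {
        if (parent.isValid() || m_pending)
            return;
        m_pending = true;
        m_source->request(m_rows.size(), 100);   // ask for the next chunk
        // We return here with no new rows; the view assumes the data was
        // populated synchronously, which is the crux of the problem.
    }

private slots:
    void onRowsReady(const QStringList &rows)
    {
        m_pending = false;
        if (rows.isEmpty()) {
            m_complete = true;                   // server has nothing more
            return;
        }
        beginInsertRows(QModelIndex(), m_rows.size(),
                        m_rows.size() + rows.size() - 1);
        m_rows += rows;
        endInsertRows();
    }

private:
    NetworkSource *m_source;
    QStringList m_rows;
    bool m_pending = false;
    bool m_complete = false;
};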

What I ended up doing is to "invent" a real data layer, to which my derived
model (think QAbstractItemModel implementation) is a proxy. The data source
signals the changes and the model just (re)translates the (internal)
identifiers into QModelIndex instances for the view. This comes with its own
battery of problems and is quite a lot of work, but it seems to work well.
Bear in mind that when I start up the model and the data source, I stream all
the data for the model to the client (in manageable chunks), and then handle
changes as unsolicited notifications from the network peer (i.e. the server).
As far as my research into the issue goes, depending on the framework's
built-in incremental loading is futile: the model-view framework doesn't care
about partial datasets, it only cares that the child data for a given parent
index is loaded; it has no notion of paging the data.
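
In code the idea looks roughly like the following; DataSource and its
itemChanged(quint64) signal are placeholders for my actual data layer. The
point is that the data stays in the source and the model's only job is to
translate the internal id into a QModelIndex and forward the notification:

#include <QAbstractListModel>
#include <QVector>

// Rough sketch, assuming a hypothetical DataSource that owns the data and
// emits itemChanged(quint64 id) when the server pushes an update. The model
// keeps only the row order and translates ids into QModelIndex instances.
class SourceBackedModel : public QAbstractListModel
{
    Q_OBJECT
public:
    explicit SourceBackedModel(DataSource *source, QObject *parent = nullptr)
        : QAbstractListModel(parent), m_source(source)
    {
        connect(m_source, &DataSource::itemChanged,
                this, &SourceBackedModel::onItemChanged);
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_ids.size();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_source->value(m_ids.at(index.row()));  // data lives in the source
    }

private slots:
    void onItemChanged(quint64 id)
    {
        const int row = m_ids.indexOf(id);              // id -> row translation
        if (row < 0)
            return;
        const QModelIndex idx = index(row, 0);
        emit dataChanged(idx, idx);
    }

private:
    DataSource *m_source;       // the real data layer
    QVector<quint64> m_ids;     // row order, kept in sync with the source
};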

What was somewhat challenging was to batch up the inserts/updates/deletions
that are signaled from the data source, because the model expects them to be
grouped by parent, which naturally isn't always the case; in the end there is
some boilerplate to organize that as well.
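
The boilerplate is roughly along these lines (MyTreeModel, RowInsert and the
indexForId() lookup are made up for the example): group the incoming
notifications by parent, sort the rows, and emit one
beginInsertRows()/endInsertRows() pair per contiguous range:

#include <QHash>
#include <QVector>
#include <algorithm>

// Hypothetical notification payload from the data source.
struct RowInsert
{
    quint64 parentId;   // internal identifier of the parent item
    int row;            // row position in the final layout under that parent
};

// Sketch of the grouping boilerplate; MyTreeModel and indexForId() are
// placeholders. Rows are taken to be final positions and are processed in
// ascending order, one beginInsertRows()/endInsertRows() per contiguous run.
void MyTreeModel::onRowsInserted(const QVector<RowInsert> &inserts)
{
    // 1. Group the incoming rows by parent identifier.
    QHash<quint64, QVector<int>> byParent;
    for (const RowInsert &ins : inserts)
        byParent[ins.parentId].append(ins.row);

    // 2. For each parent, emit properly grouped insertion ranges.
    for (auto it = byParent.begin(); it != byParent.end(); ++it) {
        QVector<int> &rows = it.value();
        std::sort(rows.begin(), rows.end());

        const QModelIndex parent = indexForId(it.key()); // hypothetical lookup
        int first = 0;
        while (first < rows.size()) {
            int last = first;
            while (last + 1 < rows.size() && rows.at(last + 1) == rows.at(last) + 1)
                ++last;
            beginInsertRows(parent, rows.at(first), rows.at(last));
            // ... move the corresponding items into the model's bookkeeping ...
            endInsertRows();
            first = last + 1;
        }
    }
}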

As for the filtering, currently I have a couple of sort-filter proxies, but as
work goes along I'm planning to wrap up my own filter proxy with multiple
filter fields, because for large(-ish) datasets the multiple model index
remappings eat a lot of time and memory.
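
What I have in mind for that proxy is roughly the following sketch; the
per-column pattern map and the substring matching are just illustrative
choices. With a single QSortFilterProxyModel subclass that checks all the
fields in filterAcceptsRow() there's only one proxy layer, hence only one
index remapping:

#include <QSortFilterProxyModel>
#include <QMap>

// Sketch of a multi-field filter proxy; the column -> pattern map and the
// case-insensitive substring match are illustrative, not prescriptive.
class MultiFieldFilterProxy : public QSortFilterProxyModel
{
    Q_OBJECT
public:
    using QSortFilterProxyModel::QSortFilterProxyModel;

    void setFilter(int column, const QString &pattern)
    {
        if (pattern.isEmpty())
            m_filters.remove(column);
        else
            m_filters.insert(column, pattern);
        invalidateFilter();                     // re-evaluate all source rows
    }

protected:
    bool filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const override
    {
        // A row is accepted only if every active per-column filter matches.
        for (auto it = m_filters.constBegin(); it != m_filters.constEnd(); ++it) {
            const QModelIndex idx =
                    sourceModel()->index(sourceRow, it.key(), sourceParent);
            if (!idx.data().toString().contains(it.value(), Qt::CaseInsensitive))
                return false;
        }
        return true;
    }

private:
    QMap<int, QString> m_filters;               // column -> substring pattern
};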

I hope that helps you a bit

Kind regards,
Konstantin.