[Development] Question about QCoreApplicationData::*_libpaths

Bubke Marco Marco.Bubke at theqtcompany.com
Sun Jan 24 21:41:08 CET 2016


On January 24, 2016 21:11:18 Hausmann Simon <Simon.Hausmann at theqtcompany.com> wrote:

> Hi,
>
> Could you elaborate where you see copy on write causing writes to shared cache lines? Are you concerned about the shared cache line for the reference count?
>
> For reading MESI allows for shared cache lines and for hyper threads the shared l1 data cache mode favors sharing and thus CoW.

I was speaking about distinct caches and the latency you introduce when you invalidate a cache line. It really depends on the out-of-order implementation, but as far as I understand, atomics are still much slower than simply not sharing at all. But like I said, it is better to measure it.
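
To make the point concrete, here is a minimal sketch of the kind of micro-benchmark I have in mind (not Marc's benchmark; all names are invented): two threads bumping one shared atomic counter, the way a CoW refcount is bumped, versus each thread bumping its own counter on a separate cache line.

    // Hypothetical sketch: one shared atomic (CoW-refcount style) vs.
    // per-thread counters padded to separate cache lines. Names are made up.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    constexpr long kIterations = 10'000'000;

    std::atomic<long> sharedCount{0};            // both threads write this cache line

    struct alignas(64) PaddedCount { std::atomic<long> value{0}; };
    PaddedCount privateCount[2];                 // one cache line per thread

    template <typename Fn>
    static double timeMs(Fn fn)
    {
        auto start = std::chrono::steady_clock::now();
        std::thread a(fn, 0), b(fn, 1);
        a.join(); b.join();
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start).count();
    }

    int main()
    {
        double shared = timeMs([](int) {
            for (long i = 0; i < kIterations; ++i)
                sharedCount.fetch_add(1, std::memory_order_relaxed);
        });
        double unshared = timeMs([](int id) {
            for (long i = 0; i < kIterations; ++i)
                privateCount[id].value.fetch_add(1, std::memory_order_relaxed);
        });
        std::printf("shared line: %.1f ms  unshared lines: %.1f ms\n",
                    shared, unshared);
    }

On most machines I would expect the shared counter to be noticeably slower because the line ping-pongs between the cores, but again, better to measure it on the hardware you actually care about.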

But my general question is why we use CoW, how often it helps, and how often it hurts. What are the other techniques? How can we help with tools, so that we fix it much earlier and not at run time? Could we find out, with some kind of profiling, the cases where sharing would be good and add fixits for them?
E.g.

You returned this really big member in the test run. We can change it to a shared pointer or a CoW container. Do you want that? Yes or no?

Actually, I think many mistakes like unneeded copies could be hinted at by the code model too.
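
As a rough sketch of the kind of transformation such a hint could offer (class and member names invented purely for illustration):

    #include <memory>
    #include <vector>

    struct Mesh { std::vector<float> vertices; };   // assume this is the "really big" member

    class SceneBefore {
        Mesh m_mesh;
    public:
        Mesh mesh() const { return m_mesh; }        // the test run sees a large copy here
    };

    class SceneAfter {
        std::shared_ptr<const Mesh> m_mesh = std::make_shared<Mesh>();
    public:
        std::shared_ptr<const Mesh> mesh() const { return m_mesh; }  // suggested fix: share, don't copy
    };

The tool would only propose it; whether sharing is actually the right trade-off is exactly the question above.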

> What am I missing to understand your statement?
>
>
> Simon
>
>   Original Message
> From: Bubke Marco
> Sent: Sunday, January 24, 2016 19:10
> To: Kevin Kofler; development at qt-project.org
> Subject: Re: [Development] Question about QCoreApplicationData::*_libpaths
>
>
> On January 24, 2016 17:45:36 Kevin Kofler <kevin.kofler at chello.at> wrote:
>
>> Marc Mutz wrote:
>>> (numThread == 2, same box)
>>>
>>> Copying is still not significantly slower than ref-counting, even for 4K
>>> elements.
>>
>> But it is already slower with as little as 32 elements, and stops being
>> significantly faster already at 16 elements.
>>
>> And now try with numThread == 1 for some extra fun. :-) A lot of code out
>> there is still single-threaded.
>>
>
>  Yes, but in the future processors are getting more and more parallel. If I am working on a bigger dataset with parallel algorithms, I don't want to share writes to the same cache line, which is something CoW introduces.
>
>>         Kevin Kofler
>>
>> _______________________________________________
>> Development mailing list
>> Development at qt-project.org
>> http://lists.qt-project.org/mailman/listinfo/development
>
> --
> Sent from cellphone, sorry for the typos
> _______________________________________________
> Development mailing list
> Development at qt-project.org
> http://lists.qt-project.org/mailman/listinfo/development

--
Sent from cellphone, sorry for the typos
