[Development] Default enabling inbound flow control on sockets

shane.kearns at accenture.com shane.kearns at accenture.com
Mon Feb 13 14:28:27 CET 2012


You've correctly understood the problem, but the scale is less bad (it takes a couple of minutes to exhaust the 2GB limit of a 32-bit process using a loopback connection)

Changing the default would mean behavior is the same as if you called setReadBufferSize(65536) in the application. Currently the default is setReadBufferSize(0), where 0 has the special meaning of unlimited.
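For illustration, this is what an application has to do today to opt in to inbound flow control explicitly (a sketch; the function name is illustrative, and 64k matches the proposed default):

```cpp
#include <QTcpSocket>

// Sketch: bounding the inbound buffer, as an application must do
// explicitly under the current defaults.
void configureSocket(QTcpSocket *socket)
{
    // Qt stops reading from the OS once this much unread data is
    // buffered; TCP flow control then throttles the sender.
    socket->setReadBufferSize(65536);

    // The current default; 0 means buffer without bound:
    // socket->setReadBufferSize(0);
}
```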

Performance tests should not break, because they are reading as fast as possible. There may be a degradation on one side if the writer is faster than the reader.

UDP sockets losing data if buffering is exceeded is acceptable (it's supposed to be an unreliable service)
I hadn't thought too much about UDP at this point, just TCP.

The readAll issue is with QNetworkReply rather than QTcpSocket.
There are two programming models in use for this:

Simple model:
1. send request
2. wait for the finished signal from the reply
3. call readAll to get the data

Good model:
1. send request
2. each time the reply emits readyRead
2a. call readAll to get the currently buffered data
2b. save / process currently buffered data
3. when finished signal is received, call readAll a final time to get the tail

The simple model tends to be used where the expected data is small.
The good model tends to be used where the data size is unknown (e.g. browsers, download managers)
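The two models above can be sketched as follows (Qt 5 connect syntax for brevity; function names are illustrative):

```cpp
#include <QNetworkReply>
#include <QFile>
#include <QObject>

// Simple model: fine for small replies, but it breaks if a default
// buffer limit is imposed and the payload exceeds it.
void simpleModel(QNetworkReply *reply)
{
    QObject::connect(reply, &QNetworkReply::finished, [reply]() {
        QByteArray all = reply->readAll();  // everything at once
        // ... process `all` ...
        reply->deleteLater();
    });
}

// Good model: drain the buffer as data arrives, so it never grows
// beyond what is currently in flight.
void goodModel(QNetworkReply *reply, QFile *out)
{
    QObject::connect(reply, &QIODevice::readyRead, [reply, out]() {
        out->write(reply->readAll());   // 2a/2b: read and save
    });
    QObject::connect(reply, &QNetworkReply::finished, [reply, out]() {
        out->write(reply->readAll());   // 3: collect the tail
        reply->deleteLater();
    });
}
```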

We could give QNetworkReply a much larger buffer size (e.g. 1MB) to catch reasonably small downloads, but this might make problems harder for developers to spot, as they would hit the buffer limit only rarely.

Yes, QIODevice allows partial writes, but Q*Socket always accepts all the data in the current implementation.
I actually think this side is less of a problem, because the developer knows what kind of data they will be sending.
Therefore they are in the best position to decide whether they need write flow control or not.
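When write flow control is wanted, it can be built on the bytesWritten signal. A minimal sketch (function name and chunk size are illustrative, not an existing API):

```cpp
#include <QTcpSocket>
#include <QFile>
#include <QObject>

// Sketch: application-level write flow control, since Q*Socket will
// otherwise accept any amount of data into its internal buffer.
void sendFile(QTcpSocket *socket, QFile *source)
{
    const qint64 chunk = 65536;
    auto writeMore = [socket, source, chunk]() {
        // Only top up once the socket has drained its buffer, so the
        // application never holds more than ~one chunk in memory.
        if (socket->bytesToWrite() < chunk && !source->atEnd())
            socket->write(source->read(chunk));
    };
    QObject::connect(socket, &QIODevice::bytesWritten, writeMore);
    writeMore();  // prime the first chunk
}
```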
-----Original Message-----
From: andrew.stanley-jones at nokia.com [mailto:andrew.stanley-jones at nokia.com] 
Sent: 11 February 2012 11:36
To: Kearns, Shane
Subject: Re: [Development] Default enabling inbound flow control on sockets


Ok, let me verify I understand the problem:

1. If a Qt app is not processing the data, sockets continue reading no matter what.

2. This means there is no flow control, and on a 10GigE connection the Qt app can receive a ~1 gigabyte/sec stream of data (assuming the HW/SW etc. can handle it).

3. Qt app exhausts all memory within a couple of seconds.

This seems like relatively brain dead behaviour.  On large systems it's a problem, and on smaller devices outright dangerous.

What will changing the default buffer size do?  I assume you mean setting a reasonable default for QAbstractSocket::readBufferSize().  This seems sensible: it will stop the stream and apply flow control, where the protocol supports it.  The downsides I can see:

1. It will break performance tests.

2. datagram (UDP) sockets will lose data.

3. As you said, readAll() may have problems, but it's also incompatible with sane network programming.  How can I trust the other side is only going to send 50 bytes and not 4 gigs?

There's no requirement for write buffering as well.  Under Unix you can write data to a socket without blocking and/or have it write less data than requested.  QIODevice accounts for this: write() can return fewer bytes than were passed, including 0.  You can easily argue most programs don't handle this well.  But at some point network links can't handle traffic at an arbitrary rate and flow control must kick in.  Buffering is a short-term solution.

I think it's a good idea to have a built-in limit.  It avoids an easily made mistake by developers.

-Andrew

On 10/02/2012, at 8:25 PM, ext shane.kearns at accenture.com wrote:

> The current default behaviour of Qt sockets is that they allocate an unbounded amount of memory if the application is not reading all data from the socket but the event loop is running.
> In the worst case, this causes memory allocation failure resulting in a crash or app exit due to bad_alloc exception
> 
> We could change the default read buffer size from 0 (unbounded) to a sensible default value e.g. 64k (to be benchmarked)
> Applications wanting the old behaviour could explicitly call setReadBufferSize(0)
> Applications that already set a read buffer size will be unchanged
> 
> The same change would be required at the QNetworkAccessManager level, as there is no point applying flow control at the socket and having unbounded memory allocations in the QNetworkReply.
> 
> This is a behavioural change that would certainly cause regressions.
> e.g. any application that waits for the QNetworkReply::finished() signal and then calls readAll() would break if the size of the object being downloaded is larger than the buffer size.
> 
> On the other hand, we can't enable default outbound flow control in sockets (we don't want write to block).
> Applications need to use the bytesWritten signal.
> QNetworkAccessManager already implements outbound flow control for uploads.
> 
> Is making applications safe against this kind of overflow by default worth the compatibility breaks?
> 
> 
> _______________________________________________
> Development mailing list
> Development at qt-project.org
> http://lists.qt-project.org/mailman/listinfo/development
