[Interest] TCP ACK with QDataStream -- OR: disconnect race condition detection

Till Oliver Knoll till.oliver.knoll at gmail.com
Mon Sep 10 14:53:31 CEST 2012

Am 10.09.2012 um 13:36 schrieb Thiago Macieira <thiago.macieira at intel.com>:

> On segunda-feira, 10 de setembro de 2012 13.33.24, Till Oliver Knoll wrote:
>> Am 10.09.2012 um 13:19 schrieb Thiago Macieira <thiago.macieira at intel.com>:
>>> ...
>>> It's a layering violation for the Application layer to depend on how the
>>> Transport layer works. As I pointed out, the Transport layer may deliver
>>> the bytes and even ACK them, but the Application layer above may never
>>> receive them, for a variety of reasons.
>> I was just thinking for a moment that "knowing the number of ACK'ed bytes"
>> /could/ be useful in some cases, as in "implementing a minimal reliable
>> message protocol" (based entirely on the "TCP reliability").
> If TCP is reliable by doing ACKs, why the hell would you need to track ACKs 
> too?

Because, since that information is around anyway, why not use it (apart from the fact that it doesn't seem to be accessible in real-world implementations)?

As in "I want to send a message of 1000 bytes. If the counterparty's Transport layer doesn't acknowledge the receipt of all 1000 bytes, I assume something went wrong (and take action by re-sending the message after a while, in an "at least once" scenario)".

That is, the application *wants* to know the number of bytes received by the counterparty, but doesn't want to apply an Application level protocol.

For instance, the scenario could be that the "network" is the *only* concern to the application, or in other words: once the counterparty's Transport layer has ACK'ed the packets, you know (or rather: expect) that "all will be fine" (and if not, the counterparty just blew up and the connection will be closed and you'll know the next time you try to send a message).

So I might turn around the question: "Why would you NOT want to track (or rather: get informed about) the ACK which is done by the underlying layer?"

Depending on your concrete use case that might just be the information you need to make your "Non-Application-Level" protocol a little bit more reliable.

And coming back to the OP's actual question: currently you can't figure out whether the last n bytes did get through or not. Either you check the connection *before* you start sending (in which case the connection might just break the moment you start sending), or you check *after* (some while) whether the connection is "still good" (in which case you still don't know for sure whether all, some, or none at all of your bytes have been sent and ACK'ed by the counterparty).
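As an aside, some kernels do expose exactly this counter, even though QTcpSocket does not. Here is a minimal sketch (Python, Linux-only, using the SIOCOUTQ ioctl, which shares its value with termios' TIOCOUTQ) — purely an illustration that the Transport layer already has this information, not something portable:

```python
# Linux-only sketch: SIOCOUTQ (same ioctl number as termios.TIOCOUTQ) reports
# how many bytes handed to a TCP socket have not yet been ACK'ed by the peer.
# QTcpSocket does not expose this; shown only to illustrate the point.
import fcntl
import socket
import struct
import termios
import time

def unacked_bytes(sock: socket.socket) -> int:
    """Bytes written to the socket that the peer has not yet ACK'ed."""
    raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
    return struct.unpack("i", raw)[0]

# Demo over loopback: send 1000 bytes, let the peer's kernel ACK them,
# and watch the counter drain back to zero.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
snd = socket.create_connection(srv.getsockname())
rcv, _ = srv.accept()

snd.sendall(b"x" * 1000)
deadline = time.time() + 2.0
while unacked_bytes(snd) and time.time() < deadline:
    time.sleep(0.01)
remaining = unacked_bytes(snd)   # 0 once all 1000 bytes were ACK'ed
```

Note that this only tells you the peer's *kernel* got the bytes — the Application layer above it may still never see them, which is Thiago's layering point.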

> It's redundant: if you get the ACK information, that's because TCP got it 
> too;

... and your application knows that at least the counterparty's Transport layer has received the data. If that's all you care about then you have gained "information".

> if you don't get it, neither did TCP.

But you can react upon it (after some timeout), whereas if you don't get this information at all you're left in the dark whether to re-send the last n bytes.

That's why one might want to track those ACKs.

> Therefore, TCP has already got all 
> the information it needs in order to be reliable.
> You can't add more reliability with the same information.

Now, as I said, the above was just a "first thought" about why one would want to know the number of ACK'ed bytes, and by all means we already agreed that in most cases that would be useless, because it would not make transmission more reliable (yes, you *do* need an Application-level protocol in such cases).

However, have a look at the following scenario:

>> Even for a simple task such as "Resuming an interrupted download" ...
> Exactly.

Assume you're now on some device with a dead slow network connection and you want to upload some data. Additionally you want to show a progress bar indicating how many bytes have already been received (hint! hint!) by the counterparty. Let's further assume the data fits fully into the send buffer (which in turn is much bigger than the packet size).

Let's have a look at a first naive implementation: you would use the QIODevice::bytesWritten signal to update the progress bar and -bang!- you're from 0 to 100% in no time! Because we just filled the send buffer, but maybe did not even put a single byte onto the wire just yet!

Now if we *had* the information about the *received bytes* (note again: here it is totally irrelevant what the receiver does with those bytes!) we could of course update the progress bar in a much more useful manner.
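To make the difference concrete, here is a hedged sketch (Python, Linux-only, again leaning on the SIOCOUTQ ioctl that QTcpSocket does not expose; the tiny buffer sizes and the payload size are arbitrary illustration values standing in for the dead slow network): "bytes accepted by the kernel" races to 100% while the ACK'ed count lags behind until the receiver actually drains the data.

```python
# Linux-only sketch contrasting "bytes written" with "bytes ACK'ed".
# Deliberately small socket buffers stand in for the slow network.
import fcntl
import socket
import struct
import termios
import time

def unacked_bytes(sock):
    raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
    return struct.unpack("i", raw)[0]

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # tiny receive window
srv.bind(("127.0.0.1", 0))
srv.listen(1)
snd = socket.create_connection(srv.getsockname())
snd.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)  # tiny send buffer
rcv, _ = srv.accept()

snd.setblocking(False)
written = 0
try:
    while written < 1_000_000:      # push until the kernel refuses more
        written += snd.send(b"x" * 4096)
except BlockingIOError:
    pass                            # send buffer full: "bytesWritten" stops here

# The naive progress bar is already done (everything was "written") ...
naive_progress = 1.0
# ... but the peer has not ACK'ed all of it yet:
acked_progress = (written - unacked_bytes(snd)) / written

# Once the receiver drains the data, the ACK'ed count catches up.
received = 0
while received < written:
    received += len(rcv.recv(65536))
deadline = time.time() + 2.0
while unacked_bytes(snd) and time.time() < deadline:
    time.sleep(0.01)
final_progress = (written - unacked_bytes(snd)) / written
```

Driving the progress bar from `acked_progress` instead of the written count is exactly the "much more useful manner" above.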

Now you could still argue that this should be solved by an application level protocol, e.g. the receiver should send back the number of bytes it has processed or whatever. But a) the receiver's code might not be ours (we cannot modify it) and b) I might again turn around Thiago's question: "Why should the application duplicate what the underlying level is doing anyway?"

And all of a sudden it *seems* that the information about the received bytes is not *that* useless, after all, is it?

(By the way, that progress bar over a slow connection was exactly the initial motivation behind the Stack Overflow question I referred to in my previous post.)

