[Interest] QFile subclass that does HFS compression?
René J. V. Bertin
rjvbertin at gmail.com
Thu Dec 20 15:03:45 CET 2018
Thiago Macieira wrote:
> Almost all the compression libraries support streaming mode. Some will have a
I'm not aware of any library that applies HFS compression to a stream, but above
all, I'm not aware of any way to write HFS resource forks incrementally. I had
never tinkered with them at this level before, but nowadays they're apparently
just another extended attribute, set via setxattr(). So I don't see any approach
other than keeping the entire resource fork in memory so you can overwrite the
whole fork (attribute) each time one or more new compressed chunks are added.
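For illustration, a minimal sketch of that overwrite-the-whole-attribute
approach. setxattr() and XATTR_RESOURCEFORK_NAME are real macOS API; the helper
name and the assumption that the caller keeps the complete fork in one buffer
are mine:

#include <sys/xattr.h>

/* Hypothetical helper: rewrite the complete resource fork in one go.
 * 'fork' must hold the full fork contents (chunk table plus all the
 * chunks compressed so far); there is no incremental append. */
static int flush_fork(const char *path, const void *fork, size_t len)
{
    /* position 0, options 0: replace the whole attribute value */
    return setxattr(path, XATTR_RESOURCEFORK_NAME, fork, len, 0, 0);
}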
Even if you could append to an existing resource fork, there's the fact that the
block or chunk table (or whatever you wish to call it) comes before the chunks
themselves. Evidently that table grows if you don't know the final file size at
the start. It's understandable that this table sits at a known location at the
start of the compressed data, but it does get in the way.
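Schematically, the shape I mean is something like this. It's a simplified
illustration of a table-first layout, not the actual on-disk decmpfs
resource-fork format, which has more header structure than this:

#include <stdint.h>

/* Per-chunk bookkeeping: where the compressed chunk lives and how
 * big it is. */
struct chunk_entry {
    uint32_t offset;   /* position of the chunk's compressed data */
    uint32_t length;   /* compressed size of the chunk */
};

/* Table-first layout: the entry table sits at the front and the
 * chunk data follows it. Every new chunk adds a table entry, so if
 * the final size isn't known up front the table keeps growing and
 * everything behind it has to shift. */
struct fork_layout {
    uint32_t           num_chunks;
    struct chunk_entry table[];  /* chunk data follows the table */
};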
I guess that could explain why no one has written a streaming HFS compression
library, and I understand better now why afsctool used a rather blunt
implementation before I started optimising it a bit (one buffer to hold the
entire original file, and one buffer large enough for the worst-case compressed
result plus the corresponding chunk table).
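In zlib terms that blunt approach boils down to something like the sketch
below. This is not afsctool's actual code, and the table layout is a stand-in
rather than the real decmpfs format; the 64 KiB chunk size matches what HFS
compression uses as far as I know, and compressBound() is zlib's real
worst-case-size helper:

#include <stdint.h>
#include <stdlib.h>
#include <zlib.h>

#define CHUNK 65536u  /* HFS compression works on 64 KiB chunks */

/* Compress a whole in-memory file into a single worst-case-sized
 * buffer: a chunk table (count, then offset/length per chunk)
 * followed by the zlib-compressed chunks. Returns the buffer and
 * stores the used size in *out_len, or returns NULL on failure. */
unsigned char *compress_fork(const unsigned char *in, size_t in_len,
                             size_t *out_len)
{
    uint32_t nchunks = (uint32_t)((in_len + CHUNK - 1) / CHUNK);
    size_t table_len = sizeof(uint32_t) * (1 + 2 * (size_t)nchunks);
    /* compressBound() gives zlib's worst case for one chunk; size
     * the buffer so even an incompressible file fits. */
    unsigned char *buf =
        malloc(table_len + (size_t)nchunks * compressBound(CHUNK));
    if (!buf)
        return NULL;

    uint32_t *table = (uint32_t *)buf;
    table[0] = nchunks;
    size_t pos = table_len;  /* chunk data starts after the table */

    for (uint32_t i = 0; i < nchunks; i++) {
        const unsigned char *src = in + (size_t)i * CHUNK;
        uLong src_len = (uLong)(in_len - (size_t)i * CHUNK);
        if (src_len > CHUNK)
            src_len = CHUNK;
        uLongf dst_len = compressBound(src_len);
        if (compress(buf + pos, &dst_len, src, src_len) != Z_OK) {
            free(buf);
            return NULL;
        }
        table[1 + 2 * i] = (uint32_t)pos;      /* chunk offset */
        table[2 + 2 * i] = (uint32_t)dst_len;  /* compressed length */
        pos += dst_len;
    }

    *out_len = pos;
    return buf;
}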
> pull mechanism, where it will ask you for a number of bytes; others will have
> a way for you to detect the next block's size so it can be decoded. Usually,
> memory usage in the decoder is not that big, but it might rise to a megabyte
> or two for large, well-compressed files.
When did this become about decoding (decompressing)?
R.