[Interest] TLS/SSL XML encryption security

Matthew Woehlke mwoehlke.floss at gmail.com
Mon Oct 7 22:46:33 CEST 2019


On 04/10/2019 20.17, Roland Hughes wrote:
> On 10/3/19 5:00 AM, Matthew Woehlke wrote:
>> On 01/10/2019 20.47, Roland Hughes wrote:
>>> To really secure transmitted data, you cannot use an open standard which
>>> has readily identifiable fields. Companies needing great security are
>>> moving to proprietary record layouts containing binary data. Not a
>>> "classic" record layout with contiguous fields, but a scattered layout
>>> placing single field bytes all over the place. For the "free text"
>>> portions like name and address not only in reverse byte order, but
>>> performing a translate under mask first. Object Oriented languages have
>>> a bit of trouble operating in this world but older 3GLs where one can
>>> have multiple record types/structures mapped to a single buffer (think a
>>> union of packed structures in C) can process this data rather quickly.
>>
>> How is this not just "security through obscurity"? That's almost
>> universally regarded as equivalent to "no security at all". If you're
>> going to claim that this is suddenly not the case, you'd best have
>> some *really* impressive evidence to back it up. Put differently, how
>> is this different from just throwing another layer of
>> encry^Wenciphering on your data and calling it a day? 
>
> _ALL_ electronic encryption is security by obscurity.
> 
> Take a moment and let that sink in because it is fact.
> 
> Your "secrecy" is the key+algorithm combination. When that secret is
> learned you are no longer secure. People lull themselves into a false
> sense of security regurgitating another Urban Legend.

Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:

- "Encryption" tries to make it computationally hard to decode a message.

- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)

...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?
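To make that concrete, here is a minimal sketch (in no way anyone's
actual scheme; the mask and field values are invented for illustration)
of why a fixed "translate under mask" scramble collapses the moment an
attacker can guess even one field of one record:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    // The "secret" mask, and a field value an attacker can easily
    // guess (a header tag, a known name, a run of spaces, ...).
    const std::vector<std::uint8_t> mask = {0x5A, 0xC3, 0x7E, 0x11};
    const std::vector<std::uint8_t> known = {'N', 'A', 'M', 'E'};

    // What the wire carries for that field: plaintext XOR mask.
    std::vector<std::uint8_t> wire(known.size());
    for (std::size_t i = 0; i < known.size(); ++i)
        wire[i] = static_cast<std::uint8_t>(known[i] ^ mask[i]);

    // One known plaintext/ciphertext pair hands the attacker the mask
    // for every subsequent record; no key search at all.
    for (std::size_t i = 0; i < known.size(); ++i)
        std::printf("recovered mask byte %zu: 0x%02X\n",
                    i, static_cast<unsigned>(known[i] ^ wire[i]));
    return 0;
}

Scattering the bytes around the record in a fixed pattern doesn't
change the picture much; with a handful of known records the
permutation falls out the same way.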

> One of the very nice things about today's dark world is that most are
> script-kiddies. If they firmly believe they have correctly decrypted
> your TLS/SSL packet yet still see garbage, they assume another layer of
> encryption. They haven't been in IT long enough to know anything about
> data striping or ICM (Insert Character under Mask).

So... again, you're proposing that replacing a "hard" (or not, according
to you) problem with an *easier* problem will improve security?

I suppose it might *in the short term*. In the longer term, that seems
like a losing strategy.

> He came up with a set of test cases and sure enough, this system which
> worked fine with simple XML, JSON, email and text files started
> producing corrupted data at the far end with the edge cases.

Well, I would certainly be concerned about an encryption layer whose
decryption cannot reproduce the original input exactly. That sounds like
a recipe guaranteed to eventually corrupt someone's data.
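For what it's worth, the property I would expect any such transport
layer to satisfy is a byte-exact round trip over precisely those edge
cases. A toy check along those lines (the XOR "cipher" below is only a
placeholder standing in for the real transform, not a suggestion):

#include <cassert>
#include <cstdint>
#include <vector>

// Placeholder transform; symmetric, so applying it twice is a no-op.
static std::vector<std::uint8_t> toy_cipher(std::vector<std::uint8_t> data,
                                            std::uint8_t key)
{
    for (auto& b : data)
        b ^= key;
    return data;
}

int main()
{
    const std::vector<std::vector<std::uint8_t>> cases = {
        {},                                 // empty message
        {0x00, 0x00, 0x00},                 // embedded nulls
        {' ', ' ', ' ', ' ', ' ', ' '},     // contiguous spaces
        {0xFF, 0xFE, 0x00, 0x7F, 0x80},     // arbitrary binary
    };

    // decode(encode(x)) must equal x byte-for-byte for every case.
    for (const auto& input : cases)
        assert(toy_cipher(toy_cipher(input, 0x42), 0x42) == input);
    return 0;
}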

> Even if all of that stuff has been fixed, you have to be absolutely
> certain the encryption method you choose doesn't leave its own tell-tale
> fingerprint. Some used to have visible oddities in the output when they
> encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
> a few places like these showing up on-line.

Again, though, it seems like there ought to be ways to mitigate this. If
I can check whether a decryption attempt succeeded without decrypting
the *entire* message, that is a weakness and clear grounds for
improvement.
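In fact, the sort of "fingerprint" screening you describe is cheap
enough that I'd expect a test suite to run it over its own output. A
crude sketch (the thresholds are arbitrary and purely illustrative, not
a real statistical test):

#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Good ciphertext should be indistinguishable from uniform noise, so
// visible structure is exactly the tell-tale worth flagging.
static bool looks_suspicious(const std::vector<std::uint8_t>& ct)
{
    if (ct.empty())
        return false;

    // Long runs of identical bytes, e.g. runs of spaces or nulls
    // leaking through the transform.
    std::size_t run = 1;
    for (std::size_t i = 1; i < ct.size(); ++i) {
        run = (ct[i] == ct[i - 1]) ? run + 1 : 1;
        if (run > 8)                          // illustrative threshold
            return true;
    }

    // A grossly lopsided byte histogram.
    std::array<std::size_t, 256> hist{};
    for (std::uint8_t b : ct)
        ++hist[b];
    for (std::size_t count : hist)
        if (count > ct.size() / 16 + 4)       // illustrative threshold
            return true;

    return false;
}

int main()
{
    // A run of identical bytes, standing in for "encrypted" spaces
    // that kept their structure.
    const std::vector<std::uint8_t> sample(64, 0x20);
    std::printf("suspicious: %s\n", looks_suspicious(sample) ? "yes" : "no");
    return 0;
}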

-- 
Matthew
