[Interest] TLS/SSL XML encryption security

Thiago Macieira thiago.macieira at intel.com
Wed Oct 9 08:40:02 CEST 2019


On Tuesday, 8 October 2019 09:26:19 PDT Roland Hughes wrote:
> > That DOES work with keys produced by OpenSSL that was affected by the
> > Debian bug you described. That's because the bug caused the problem space
> > to be extremely restricted. You said 32768 (2^15) possibilities.
> 
> Unless the key range 2^15 has been physically blocked from the
> generation algorithm, the database created for that still works ~ 100%
> of the time when the random key falls in that range. The percentage
> would depend on how many Salts were used for generation or them having
> created the unicorn, a perfectly functioning desalinization routine.

Sure, but that database containing 2^15 entries is no better than any other 
database with 2^15 entries generated randomly. The chances of getting a hit 
are just as infinitesimal as with the original table, except for software 
still using the broken OpenSSL version.
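A quick back-of-the-envelope check of that claim (the figures are the ones from this thread, not measurements):

```python
# A precomputed table of 2^15 keys against a healthy 2^128 keyspace:
# the probability that a randomly generated key lands in the table.
db_entries = 2**15      # entries from the old Debian OpenSSL bug
keyspace = 2**128       # keyspace of a non-broken 128-bit generator
p_hit = db_entries / keyspace
print(f"{p_hit:.1e}")   # ~9.6e-35 chance per captured key
```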

> > A non-broken random generator will produce 2^128  possibilities in 128
> > bits. You CANNOT compare fast enough
> 
> Does not matter because has nothing to do with how this works. Not the
> best, not the worst, just a set it and forget it automated kind of
> thing. It's taking roughly 8 bytes out of the packet and doing a keyed
> hit on the database. If found great! If not, it slides the window down
> one byte and performs a new 8 byte keyed hit.

First of all, you don't understand how modern cryptography works. AES, for 
example, encrypts data in 16-byte blocks. You can't slide the window down one 
byte. You can only move it at block granularity.

Second, it seems you don't understand how modern cryptography works. Later 
blocks depend on the cipher state from previous ones. So even if you knew the 
key being used, you couldn't decode anything unless you had captured the 
entire traffic from the beginning. A random match in the middle of a 
transmission won't get you a decode; the cipher has to be at a known state on 
both sides.
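The state dependence can be illustrated with a toy keystream cipher. This is a simplified sketch, not a real TLS cipher; the key, message, and offsets below are all made up for illustration:

```python
import hashlib

def keystream(key: bytes, length: bytes) -> bytes:
    """Toy position-dependent keystream (illustration only, NOT a real
    cipher): each 32-byte chunk is SHA-256(key || counter), so the bytes
    produced depend on the position (state) within the stream."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

key = b"shared-secret"
plaintext = b"AUTH CC 4111-1111-1111-1111 APPROVED"
stream = keystream(key, len(plaintext))
ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))

# An eavesdropper who captures only a middle fragment of the ciphertext,
# even WITH the key, recovers garbage unless they also know the fragment's
# offset within the stream -- i.e. the cipher state:
fragment = ciphertext[10:26]
wrong = bytes(c ^ s for c, s in zip(fragment, keystream(key, 16)))  # assumes offset 0
right = bytes(c ^ s for c, s in zip(fragment, keystream(key, 26)[10:26]))
print(wrong == plaintext[10:26])   # False: wrong state, garbage out
print(right == plaintext[10:26])   # True: correct state
```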

Third, it seems you don't understand how fast modern computers are (or, 
rather, how fast they *aren't*). You CANNOT scan a meaningful fraction of the 
2^128 space within the current lifetime of the universe, with computers that 
we have today or are likely to have in the next decade.
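To put a rough number on that, here is the arithmetic with deliberately generous assumed rates (a trillion trials per second per machine, a billion machines; both figures are invented for the estimate, not benchmarks):

```python
# Time to exhaust a 2^128 keyspace at an absurdly optimistic rate.
keys_per_sec_per_machine = 10**12   # assumption: a trillion trials/sec
machines = 10**9                    # assumption: a billion such machines
seconds = 2**128 / (keys_per_sec_per_machine * machines)
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")         # ~1.1e10 years, on the order of the
                                    # age of the universe (~1.4e10 years)
```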

The only way your attack works is against 15-year-old ciphers, things like 
3DES or RC4. There's a reason they are deprecated and disabled in all modern 
OpenSSL versions. There may be people out there running old versions and not 
caring or not knowing that they are insecure. Security requires paying 
attention to security disclosures and keeping your software up-to-date.

> > So it can happen. But the chance that it does happen and that the captured
> > packet contains critical information is infinitesimal.
> 
> When you are targeting a DNS address which has the sole purpose of
> providing CC authorization requests and responding to them, 100% of the
> packets contain critical information. Even the denials are important
> because you want to store that information in a different database. If
> you ever compromise any of those cards, sell them on the Dark Web cheap
> because they are unreliable.

Ok, I will grant you that if you choose your victim well, the chances that the 
intercepted packet contains critical information are high.

That doesn't mean you can decode it. The chance of a random match is still 
infinitesimal.

> > Crackers don't attack the strongest part of the TLS model, which is the
> > encryption. They attack the people and the side-channels.
> 
> Kids do.

Yeah, because they have no clue what they're doing. They have no hope of 
cracking proper security this way.

> Nah, this isn't a lease storage space type of attack. If they are well
> funded and willing to risk their own computer room with a rack they will
> get one or more of these.

I just used the public AWS numbers as a benchmark, since they are well-known 
and I could do math with them. I have no clue how much it costs to run a DC 
for 2 exabytes of storage. Just the power bill will be huge.

> They will start out with an HP or other SFF desktop and a 6+TB drive.

An 8 TB drive is roughly 2^43 bytes. That means it can store 2^39 16-byte 
entries, assuming no overhead. We're talking about a 2^128 problem space: 
that's 2^89 times bigger. 618,970,019,642,690,137,449,562,112 times bigger.

Even one trillionth of that is still 618,970,019,642,690.1 times bigger.
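The figures above check out exactly:

```python
# Ratio of the 2^128 problem space to what one 8 TB drive can hold.
drive_bytes = 2**43            # an 8 TB drive, to the nearest power of two
entries = drive_bytes // 16    # 16-byte entries, assuming zero overhead
keyspace = 2**128
ratio = keyspace // entries
print(ratio)                   # 618970019642690137449562112, i.e. 2**89
print(ratio / 10**12)          # ~6.19e14: still enormous at one trillionth
```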

> 56TB - ~9lbs
> https://www.bhphotovideo.com/c/product/1466481-REG/owc_other_world_computing
> _owctb2sre56_0s_56tb_thunderbay_4_raid.html/specs

4 kg / 56 TB  = 71.4 picograms/byte. That's actually pretty good.
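For the record, the density arithmetic (taking ~9 lb as 4 kg and 56 TB as decimal terabytes):

```python
mass_pg = 4 * 10**15           # 4 kg expressed in picograms
size_bytes = 56 * 10**12       # 56 TB, decimal terabytes assumed
density = mass_pg / size_bytes
print(density)                 # ~71.4 picograms per byte
```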

> In order to plan against attack one has to profile just who the attacker
> is and what motivates them. For the large "corporate" organizations, you
> are correct. They are looking to score a billion dollars per week and
> aren't interested in a slow walk. The patient individuals who aren't
> looking to "get rich quick" are much more difficult to defend against.
> The really stupid ones get caught right away. The really smart ones take
> a slow path like this.

And they get nowhere.

> > Have you ever heard of Claude Shannon?
> 
> Nope.

Maybe you should read up on the Father of Information Theory. Seems kind of 
relevant for your profession.

> > Anyway, you can't get more data into storage than there are possible
> > states of matter. As far as our*physics*  knows, you could maybe store a
> > byte per electron. That would weigh 5 billion tons to store 16 * 2^128 
> > bytes.
>
> The same physics, when incorrectly applied "prove" bumblebees cannot fly?

It doesn't matter whether the physics is right or not. What matters is what we 
know it to be. All storage devices store and retrieve exactly as much data as 
they are designed to store. You can't design a denser device until you 
understand the physics that would make denser storage possible.

Or put another way: before there's a device that can store more data than my 
calculations allow, there has to be a breakthrough in physics that explains 
how that can happen. Between the breakthrough in physics and a practical, 
affordable storage device there will be sufficient time to deal with the 
concern and modify the cryptographic algorithms to resist it.

That's what's happening today with Post-Quantum Cryptography.

> If there is a ToD sensitivity in the random generator, shouldn't be, but
> on this Debian system looks like there might be, then one can
> dramatically reduce the DB size needed and reduce the target range to
> all traffic within a window.

That's a side-channel attack: exploiting a flaw in the random generator to 
reduce the problem space. That's exactly what I've been arguing: attacks 
aren't done against the strongest part of the system, they are done against 
the flaws.

If you tell me that there are vulnerabilities known to only a few people in 
the deepest Dark Web, the NSA and maybe one or two more state actors, who can 
exploit that vulnerability and decrypt ciphertext that is affected, I'll 
believe you. I don't doubt there's an operation somewhere running a dedicated 
DC to exploit such flaws.

But the more people know about the flaw, the greater the risk it'll become 
public knowledge, then get fixed. So no script kiddie is going to be 
decrypting packets any time soon.

> > I don't doubt that there are hackers that have dedicated DCs to cracking
> > credit card processor traffic they may have managed to intercept. But they
> > are not doing that by attacking the encryption.
> 
> Some are and some aren't. The fact so many deny the possibility is the
> reason.

I repeat: no one is attacking by brute-forcing the strongest part of the 
encryption.

Want me to repeat again?

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel System Software Products




