Fix the Key Replacement Attack #240
We'll have to either (1) Decide we don't care about this, which I'm very skeptical of doing, or (2) Add some sort of domain separation parameter to the encryption (which would also let us provide a "key names" feature, see #241, and could be independently useful to users of our library who are building complex protocols).

Edit: Don't conflate "adding internal domain separation" (which would fix this attack) with "adding domain separation and using it internally as well as exposing it to the user" (which would fix this attack but complicate the API). We can do the former without the latter, so make the decision independently.
Incidentally, NaCl exposed the nonce so that it could take on values that were meaningful to the protocol it was being used in. If we exposed the nonce then we too could use it for domain separation here. I've always thought NaCl would have been better off not exposing the nonce and instead adding a "domain separation" parameter, but DJB didn't do that for efficiency reasons (I think). Maybe we should implement that in this library. I think if we encoded it into the HKDF info strings we'd have almost the same performance. But we need to be careful about what properties we want it to have.
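For concreteness, here is a minimal sketch of what internal domain separation via an HKDF info string could look like. Illustrative only: `deriveSubkey()` and the info strings are invented for this example, and this is not the library's actual key-derivation code. It assumes PHP >= 7.1.2 for `hash_hkdf()`.

```php
<?php
// Minimal sketch of internal domain separation via an HKDF "info" string.

function deriveSubkey(string $masterKeyMaterial, string $domain): string
{
    // Two derivations that differ only in $domain yield unrelated subkeys,
    // so "encrypting a message" and "encrypting a key" can never be
    // interchanged even when the same master secret is used for both.
    return hash_hkdf('sha256', $masterKeyMaterial, 32, 'example-lib-v1|' . $domain);
}

$master                = random_bytes(32);
$messageEncryptionKey  = deriveSubkey($master, 'message-encryption');
$keyWrappingKey        = deriveSubkey($master, 'key-wrapping');

var_dump($messageEncryptionKey === $keyWrappingKey); // bool(false)
```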
Here's a pattern security auditors can memorize that will help them detect such vulnerabilities in other things: If you're encrypting messages and keys that will be used to encrypt other messages, make sure "encrypting a key" and "encrypting a message" are cryptographically different things. (Generalizing this: If you have multiple semantically distinct message spaces (e.g. messages, keys, ciphertexts -- for double-encryption or whatever) then make sure the encryption operations for each space are domain-separated.)
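One way to see that pattern in code (purely as an illustration, not how this library does anything): with an AEAD you can bind the message space into the ciphertext as associated data, so a ciphertext created for one space is rejected when presented as another. The sketch below uses libsodium, which ships with PHP >= 7.2; the "type" strings are made up.

```php
<?php
// Domain separation via AEAD associated data: the plaintext's "type" is
// authenticated along with the ciphertext, so a "wrapped-key" ciphertext
// cannot be opened as a "message" ciphertext (and vice versa).

function sealWithType(string $plaintext, string $type, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
    return $nonce . sodium_crypto_aead_xchacha20poly1305_ietf_encrypt($plaintext, $type, $nonce, $key);
}

function openWithType(string $sealed, string $type, string $key): string
{
    $nonceLen  = SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES;
    $plaintext = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt(
        substr($sealed, $nonceLen), $type, substr($sealed, 0, $nonceLen), $key
    );
    if ($plaintext === false) {
        throw new RuntimeException('Decryption failed (wrong key or wrong domain).');
    }
    return $plaintext;
}

$key       = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
$sealedKey = sealWithType(random_bytes(32), 'wrapped-key', $key);

try {
    openWithType($sealedKey, 'message', $key); // wrong domain
} catch (RuntimeException $e) {
    echo "Cross-domain ciphertext rejected, as intended.\n";
}
```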
We could "wontfix" this by adding this to the documentation:
If the secret was a key, and people had to reuse keys, I'd be fine with that. But these are passwords, which implies they're chosen by the user. The developer of some application that encrypts files with passwords has no control over whether some other application developer builds something that protects keys with passwords, and some of the users will re-use passwords, either by chance or because they're lazy.

This isn't the only attack that's possible because of this. You can use a decryption success oracle for password-based string decryption to check if a protected key was encrypted with the same password, and vice versa. I need to think more.
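A rough sketch of that cross-protocol check, assuming the pre-fix behavior described in this issue (where a `KeyProtectedByPassword` serialization is just the output of `Crypto::encryptWithPassword()` on the key string); the helper name is made up.

```php
<?php
// Given any oracle that reveals whether password-based string decryption
// succeeded, an attacker holding a KeyProtectedByPassword serialization can
// test whether it was protected with the same password (and vice versa).

use Defuse\Crypto\Crypto;
use Defuse\Crypto\Exception\WrongKeyOrModifiedCiphertextException;

function usesSamePassword(string $protectedKeySerialization, string $candidatePassword): bool
{
    try {
        Crypto::decryptWithPassword($protectedKeySerialization, $candidatePassword);
        return true;   // decryption succeeded: same password
    } catch (WrongKeyOrModifiedCiphertextException $e) {
        return false;  // different password (or a tampered blob)
    }
}
```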
I don't think we should add a domain separation parameter because that would be too much of a change and we need to get v2.0.0 reviewed and out soon. How about this: Inside
I did that, it's in my PR for documentation.
I shouldn't have just hashed it, I should have made it something like
I'm just curious why: is there a chance that a user could prepare a $password that makes SHA-256 fail or produces weak input for the PBKDF?
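For illustration, a domain-separated pre-hash along the lines discussed above might look like the sketch below; the prefix constant and helper name are invented for this example and are not the library's actual code. (On the question above: SHA-256 accepts any input and always yields a fixed 32-byte digest, so the pre-hash cannot fail, and steering it toward a "weak" PBKDF input would amount to breaking SHA-256 itself.)

```php
<?php
// Hypothetical sketch only; the prefix string is invented for this example.

use Defuse\Crypto\Crypto;

function protectKeyStringWithPassword(string $keyAsciiSafeString, string $password): string
{
    // Pre-hash with a domain-separation prefix so that "encrypting a key
    // with a password" and "encrypting a user string with a password" are
    // cryptographically different operations, even for the same password.
    $separated = hash('sha256', 'EXAMPLE.KeyProtectedByPassword|' . $password);
    return Crypto::encryptWithPassword($keyAsciiSafeString, $separated);
}
```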
I found a protocol interaction attack (aka a chosen-protocol attack)!
Consider the following two scenarios.
Scenario A: A web server is configured with a password P1. That web server uses this library's `Crypto::encryptWithPassword()` to encrypt arbitrary user-uploaded strings and returns the ciphertext to the user.

Scenario B: A web server is configured with a password P2. That web server allows the user to upload a serialization of a `KeyProtectedByPassword`. It attempts to unlock the `KeyProtectedByPassword` with P2 and, if successful, encrypts its bitcoin wallet private keys with the resulting `Key` and broadcasts the ciphertext to the world.

These are obviously contrived, but bear with me.
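A minimal sketch of the two servers, assuming defuse/php-encryption v2 installed via Composer; the function names and the way P1/P2 are passed in are placeholders for this example.

```php
<?php
require 'vendor/autoload.php';

use Defuse\Crypto\Crypto;
use Defuse\Crypto\KeyProtectedByPassword;

// Scenario A: a chosen-plaintext oracle for password encryption under P1.
function serverA(string $userSuppliedString, string $P1): string
{
    return Crypto::encryptWithPassword($userSuppliedString, $P1);
}

// Scenario B: unlocks a user-supplied KeyProtectedByPassword with P2 and
// broadcasts its bitcoin wallet private keys encrypted under the unlocked key.
function serverB(string $serializedProtectedKey, string $P2, string $walletPrivateKeys): string
{
    $protected = KeyProtectedByPassword::loadFromAsciiSafeString($serializedProtectedKey);
    $key       = $protected->unlockKey($P2);
    return Crypto::encrypt($walletPrivateKeys, $key); // "broadcast to the world"
}
```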
Individually, both of these scenarios are secure (but useless) on their own. In Scenario A the server is just a chosen-plaintext oracle; only the server with P1 could decrypt the ciphertexts it creates. In Scenario B, what the server is doing should be fine, since the IND-CCA2 property guarantees that it shouldn't be practical to find a valid `KeyProtectedByPassword` for P2, and so its bitcoins are safe.

However, if Scenario A and Scenario B are happening at the same time, and P1 = P2, then the combined system is not secure.
Let P = P1 = P2. The attacker can first compute the `saveToAsciiSafeString()` string representation of the all-zero-byte key. They provide that string to the server in Scenario A, obtaining the encryption of it under P. Now they can provide that ciphertext to the server in Scenario B, which will unlock it to the all-zero-byte key using P and broadcast their bitcoin private keys encrypted under the all-zero-byte key, which the attacker can easily decrypt.

(Edit to clarify: The two servers can trust each other, so they can use the same password. The server in Scenario B might trust the server in Scenario A not to steal its bitcoins. The point is, the attacker can do this without ever learning the password and without ever requiring either of the servers to behave maliciously.)
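The attack, as a sketch against the `serverA()`/`serverB()` functions from the previous snippet. It relies on the pre-fix behavior described in this issue, where Scenario A's output is accepted by `KeyProtectedByPassword::loadFromAsciiSafeString()`. The original uses the all-zero-byte key; any key the attacker knows works, so a freshly generated one is used here to keep the example runnable.

```php
<?php
require 'vendor/autoload.php';
require 'servers.php'; // hypothetical file defining serverA() and serverB() from above

use Defuse\Crypto\Crypto;
use Defuse\Crypto\Key;

$P = 'the shared password'; // P = P1 = P2; the attacker never learns it

// Step 1 (offline): serialize a key the attacker knows.
$attackerKey    = Key::createNewRandomKey();
$knownKeyString = $attackerKey->saveToAsciiSafeString();

// Step 2: have Server A encrypt that string under P. In the vulnerable
// version, the result is indistinguishable from a legitimate
// KeyProtectedByPassword serialization.
$forgedProtectedKey = serverA($knownKeyString, $P);

// Step 3: feed the forgery to Server B, which unlocks it to $attackerKey and
// broadcasts its wallet keys encrypted under a key the attacker controls.
$broadcast = serverB($forgedProtectedKey, $P, 'bitcoin wallet private keys');

// Step 4: decrypt the broadcast with the key only the attacker chose.
echo Crypto::decrypt($broadcast, $attackerKey), "\n";
```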
In other words, if the same password is re-used in two different protocols, one of which provides the attacker with a chosen-plaintext oracle for password encryption, we lose the integrity guarantees for any `KeyProtectedByPassword` protected with the same password.

I found this bug while writing documentation. I was making this change to `unlockKey()` when I realized it was possible: f6a839e
Why did this vulnerability get introduced? Because when I added `KeyProtectedByPassword`, I did not explicitly realize I was adding a second protocol which could interact with the user's application's protocol.

Is this really a vulnerability? If the user wants to use this library with the same secret in two different protocols, then it is their responsibility to ensure there are no interactions between them. However, it would not have been possible to tell that this kind of interaction exists from our API and documentation alone. This kind of interaction could have been avoided in principle, and the user reading our API and docs might have assumed it was. So it is a vulnerability.
Bonus points if you can come up with more-realistic and less-contrived examples for Scenario A and Scenario B.