Kerckhoffs' Principle
In his 1883 book La Cryptographie Militaire [2], Auguste Kerckhoffs [1] stated six axioms of cryptography. Some are no longer relevant given the ability of computers to perform complex encryption, but the second is the most critical, and perhaps the most counterintuitive:
“Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi.”

In English:

“The method must not need to be kept secret, and having it fall into the enemy's hands should not cause problems.”
Another English formulation [3] is:
“If the method of encipherment becomes known to one's adversary, this should not prevent one from continuing to use the cipher as long as the key remains unknown.”
The same principle is sometimes called "Shannon's Maxim" after Claude Shannon, who formulated it as:
“The enemy knows the system.”
A Cold War formulation was: [4]
“A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.”
That is, the security should depend only on the secrecy of the key.
Is your system secure when the enemy knows everything except the key? If not, then at some point it is certain to become worthless. Since a security analyst cannot know when that point might come, the analysis can be simplified to: the system is insecure if it cannot withstand an attacker who knows all its internal details.
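
To make the principle concrete, here is a minimal sketch (not from the original article) in Python, assuming the third-party pyca/cryptography package. The cipher, AES-256-GCM, is completely public; the security of the message rests entirely on the secrecy of the key:

    # Minimal sketch: the algorithm (AES-256-GCM) is fully published;
    # the only secret in the whole system is the key.
    # Assumes: pip install cryptography  (the pyca/cryptography package)
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # the ONLY secret
    nonce = os.urandom(12)                     # public, but must never repeat for a key

    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

    # The enemy may know AES, GCM, the nonce and the ciphertext; without the
    # key, decryption and undetected tampering should both remain infeasible.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"

An analyst handed this system can inspect every detail except the value of the key, which is exactly the situation Kerckhoffs demands.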
Implications for analysis
Any serious enemy — one with strong motives and plentiful resources — will learn all the other details. In war, the enemy will capture some of your equipment and some of your people, and will use spies. If your method involves software, enemies will do memory dumps, run it under the control of a debugger, and so on. If it is hardware, they will buy or steal some and build whatever programs or gadgets they need to test them, or dismantle them and look at chip details with microscopes. Or in any of these cases, they may bribe, blackmail or threaten your staff or your customers. One way or another, sooner or later they will know exactly how it all works.
From the defender's point of view, using secure cryptography is supposed to reduce a difficult problem — keeping messages secure — to a much more manageable one — keeping relatively small keys secure. A system that requires long-term secrecy for something large and complex — the whole design of a cryptographic system — obviously cannot achieve that goal. It only replaces one hard problem with another.
Because of this, any competent person asked to analyse a system will first ask for all the internal details. An enemy will have them, so the analyst must have them too if the analysis is to make sense.
Cryptographers will therefore generally dismiss out of hand any security claims made for a system whose internal details are kept secret. Without analysis, no system should be trusted. Without details, it cannot be properly analysed. If you want your system trusted — or even just taken seriously — the first step is to publish all the internal details. Of course, there are some exceptions; if a major national intelligence agency claims that one of their secret systems is secure, the claim will be taken seriously because they have their own cipher-cracking experts. However, no-one else making such a claim is likely to be believed.
That is, "security by obscurity" does not work. Anyone who claims something is secure (except perhaps in the very short term) because its internals are secret is either clueless or lying, perhaps both. Such claims are one of the common indicators of cryptographic snake oil.
References
- ↑ Kahn, David (second edition, 1996), The Codebreakers: The Story of Secret Writing, Scribner, p. 235
- ↑ Petitcolas, Fabien, La cryptographie militaire
- ↑ Savard, John J. G., The Ideal Cipher, A Cryptographic Compendium
- ↑ Bellovin, Steve (June 2009), Security through obscurity, Risks Digest