Monday, July 19, 2021

Why Passwords Should Be Only Words

I never liked miXED cAsE passwords, and lately I don't like all the e×tr@ characters in them either. I finally put my finger on why -- both are simply less efficient sources of entropy.

Here are some examples of passwords, what sources they come from, and how much entropy I would assign to each of them (a quick calculation sketch follows the list).

  1. ~V{.^e2AQ= - randomly generated from 96 characters, basically what shows on your keyboard. Entropy is 6.58 bits per character, so this 10 character value would have about 66 bits of entropy.
  2. WDLHYLTXKZ - from 26 letters. Entropy is 4.7 bits per letter, so this 10 character value would have 47 bits of entropy.
  3. 3756206184 - from numerals. Entropy is 3.32 bits per numeral, so this 10 digit value would have 33 bits of entropy.
  4. correct horse battery staple - "randomly selected" (not really) from common words. I'll take Randall Munroe's word on this (pretty sure he did more research than I did), and assign 11 bits of entropy per word, so this has 44 bits of entropy. (This means Munroe's "dictionary" had about 2000 words in it.)
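
If you want to check that arithmetic, here is a minimal sketch; entropy is just length times log2 of the symbol-set size, assuming every symbol is chosen uniformly at random.

    # Rough arithmetic behind the four examples above (the text rounds the results).
    import math

    def entropy_bits(symbol_set_size, length):
        """Bits of entropy in `length` symbols drawn uniformly at random."""
        return length * math.log2(symbol_set_size)

    print(entropy_bits(96, 10))    # ~65.8 bits: 10 characters from the full keyboard
    print(entropy_bits(26, 10))    # ~47.0 bits: 10 uppercase letters
    print(entropy_bits(10, 10))    # ~33.2 bits: 10 digits
    print(entropy_bits(2048, 4))   # 44.0 bits: 4 words from a ~2000-word list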

To compare these more fairly, here are passwords with roughly the same amount of entropy (about 99 bits each):

  1. $.0h>_6@)]p)za% - 15 characters
  2. KUVIFPOJYZWBUTLBYBWNQ - 21 letters
  3. 362276725051989790476913717218 - 30 digits
  4. park appear internal tale glorious nation vary anxiety access - 9 words
    • For this I used a list of over 2000 common words, from 2 to 12 characters each, and selected 9 at random (sketched below).
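
For anyone curious, this is roughly how examples like these can be generated. It is a sketch rather than the exact script I used, and WORDS stands in for the 2000+ word list mentioned above.

    # Sketch: build equal-entropy passwords by picking enough random symbols
    # to reach a target number of bits. Uses Python's secrets module (CSPRNG).
    import math
    import secrets
    import string

    WORDS = ["park", "appear", "internal", "tale", "glorious"]  # stand-in for a 2000+ word list

    def random_password(symbols, target_bits):
        """Pick just enough symbols from `symbols` to reach `target_bits` of entropy."""
        bits_per_symbol = math.log2(len(symbols))
        count = math.ceil(target_bits / bits_per_symbol)
        return [secrets.choice(symbols) for _ in range(count)]

    # A target of ~98 bits reproduces the counts above: 21 letters, 30 digits, 9 words.
    print("".join(random_password(string.ascii_uppercase, 98)))
    print("".join(random_password(string.digits, 98)))
    print(" ".join(random_password(WORDS, 98)))  # needs the full word list to really be 9 words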

Which of these seems most efficient?

Let us first address character length / storage size, which is a red herring. The number of bytes should not be a factor. Computers generally do not store the actual password anyway, just a cryptographically secure hash of it, which is a fixed size. In the edge cases where the full text is stored, it is no trouble for a computer to store 1000 characters instead of 10.
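
To make the fixed-size point concrete, here is a quick sketch. SHA-256 is used only to show the fixed output length; real password storage should use a salted, deliberately slow hash such as bcrypt, scrypt, or Argon2.

    # Whether the password is 10 characters or over 1000, the stored hash is the same size.
    import hashlib

    short_pw = "~V{.^e2AQ="
    long_pw = "correct horse battery staple " * 40  # over 1000 characters

    print(len(hashlib.sha256(short_pw.encode()).hexdigest()))  # 64 hex characters
    print(len(hashlib.sha256(long_pw.encode()).hexdigest()))   # 64 hex characters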

Another red herring is false entropy. The classic example is l33t-speak: humans get a false sense of entropy by starting from a very limited alphabet and mixing in a few extra characters. So instead of "password", you use "p@ssw0rd". This gives an intuitive feeling that it has the entropy of 8 characters drawn at random from all 96 keyboard characters (53 bits of entropy), but it is actually a simple dictionary word (17 bits of entropy) with a small permutation (+4 bits).
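
A back-of-the-envelope version of that comparison; the ~131,000-word dictionary size is my assumption, chosen only to match the 17-bit figure above.

    # What "p@ssw0rd" looks like vs. what it is, in bits of entropy.
    import math

    looks_random = 8 * math.log2(96)       # ~52.7 bits if all 8 characters were truly random
    dictionary_word = math.log2(131_000)   # ~17.0 bits: one word from a ~131k-word dictionary
    leet_tweaks = 4                        # rough allowance for the @ and 0 substitutions

    print(looks_random)                    # ~53
    print(dictionary_word + leet_tweaks)   # ~21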

So what is a more natural way to quantify efficiency? I only know of two:

  1. How hard is it to memorize, per bit of entropy?
  2. How long does it take you to input into a computer?

For effort to memorize, I believe full words take the prize hands-down. For effort to input into a computer, I think it is more complicated. Today, using a smartphone, with a "password" field that hides what I am typing, I would probably rather input the 21 letters than the 9 words. But on a keyboard, or using voice-recognition input, or even just a text field on my phone that lets me use a swipe keyboard and see what I'm typing, I would much prefer the 9 words.

This is a gut feeling; I haven't run any performance tests, and I hope someone does a real research project on this. But I do have one simple argument against mixed case and non-alphanumeric characters -- they require multiple keystrokes. Every time you change from uppercase to lowercase or vice versa, you have to move at least one other finger. If you're moving that finger anyway, would you rather add 1 bit of entropy to that character by extending the alphabet of possible characters, or 4.7 bits by simply adding another letter? I vote for the latter.
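
The arithmetic behind that trade-off, as a quick sketch:

    # The extra keystroke buys you either a wider alphabet or a longer password.
    import math

    wider_alphabet = math.log2(52) - math.log2(26)  # mixed case: +1.0 bit per character
    one_more_letter = math.log2(26)                 # one extra lowercase letter: +4.7 bits

    print(wider_alphabet, one_more_letter)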

One final consideration, which is an edge case: some old ("legacy") systems put very strict limits on the length of your password. This is rapidly becoming a thing of the past, but in systems where your password can only be 8/12/16 characters, using truly random characters drawn from the widest character set possible is a significant advantage.
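
With a hard cap, the arithmetic flips; at, say, 8 characters, the width of the character set is the only lever left. A quick sketch:

    # With a hard 8-character cap, only the width of the character set matters.
    import math

    print(8 * math.log2(96))  # ~52.7 bits: full keyboard characters
    print(8 * math.log2(26))  # ~37.6 bits: letters only
    print(8 * math.log2(10))  # ~26.6 bits: digits only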