Explain 'salting' next. Salting the hash is effective at blocking rainbow tables, unless they decide to do something silly like they did with WPA and salt the hash with the ESSID.
Changing the SSID to an uncommon one thwarts that attack.
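For the curious, the WPA design is easy to reproduce: WPA/WPA2-Personal derives its pairwise master key with PBKDF2-HMAC-SHA1 using the ESSID as the salt, which is why tables can be precomputed for common network names. A minimal sketch in Python (the passphrase and SSID values here are made up for illustration):

    import hashlib

    def wpa_psk(passphrase: str, ssid: str) -> bytes:
        # WPA/WPA2-Personal: PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 256 bits)
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    # A table precomputed for the common SSID "linksys" is useless against a
    # network with an uncommon name, even if the passphrase is identical.
    print(wpa_psk("correct horse battery", "linksys").hex())
    print(wpa_psk("correct horse battery", "attic-wifi-7g").hex())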
My attempt at explaining why it's important to use different and difficult passwords for different systems.
http://headsplosive.com/2012/06/password-security-hashes-and-rainbow-tables/
A great explanation. You explain very well how sites should implement passwords. The really good sites internally salt all the password hashing, which makes a rainbow table useless unless the hacker knows the details of how the hashes are salted.
Unfortunately, many sites do not even understand the need for hashing passwords. Basically, no site should ever store the password itself at all, which means that no secure site can ever email you your password if you forget it; the only thing it can do is reset it to a new random password. Any site that can email you your password has all the usernames and passwords stored in a database, and when it gets hacked (as happened at Sony with their PlayStation logins), the hackers get everyone's login details. Almost certainly many people will use the same password on many other sites, like banks, PayPal, GMail, Facebook, etc. People love reusing their favourite passwords.
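To make that concrete, here is a minimal sketch of what a site that never stores your password actually keeps; the function names and iteration count are my own illustrative choices, not any particular site's implementation:

    import hashlib, hmac, os

    def store_password(password: str) -> tuple[bytes, bytes]:
        # Only (salt, hash) is kept; the password itself is thrown away,
        # so there is nothing the site could ever email back to you.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        # Re-derive the hash from the stored salt and compare in constant time.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)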
A system to "salt" or add something to your password to make it different for each site is a really good idea if you need passwords you can remember that will also be fairly secure. You just have to invent a method for generating a salt from the site name that is not at all obvious.
Say your favourite password is "aaed5ght". For Facebook, you might take the first 4 characters "face", increment each letter by one to make it "gbdf", put them in reverse alphabetical order - "gfdb" - and then add this to the start of your favourite password to make it "gfdbaaed5ght" for Facebook only. If someone works out your Facebook password, they still will not know any of your passwords for other sites.
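As a sketch, that exact recipe mechanised in Python (purely illustrative; the whole point is to invent your own variation rather than use this one):

    def site_salt(site_name: str) -> str:
        prefix = site_name.lower()[:4]                      # "facebook" -> "face"
        shifted = "".join(chr((ord(c) - ord("a") + 1) % 26 + ord("a"))
                          for c in prefix)                  # increment each letter -> "gbdf"
        return "".join(sorted(shifted, reverse=True))       # reverse alphabetical -> "gfdb"

    def site_password(favourite: str, site_name: str) -> str:
        # Prepend the per-site salt to the favourite password.
        return site_salt(site_name) + favourite

    print(site_password("aaed5ght", "facebook"))            # gfdbaaed5ght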
I do particularly hate sites that require capitals, numbers, punctuation, etc. in passwords, as that just gives an illusion of security. Every site that requires this nonsense does not properly understand security. A good password is random and long, and the mix of character types ultimately hardly matters. You can have a very secure password made of just "1"s and "0"s if you like.

Forced password rules are always bad. Also, a password made only of numerals is practically guaranteed to be in a rainbow table.
One of the problems with requiring numbers and punctuation in a password is that many people use Leet (or L33t), where you, say, replace an "e" with a "3", an "i" with a "!", an "L" with a "1", and you end up with a supposedly secure password like "\/\/!11!am" instead of "William". Unfortunately, the hackers know all about Leeting, so it does not add anything to security at all.
Richard.
It is totally secure if someone only has one of your passwords. To get two passwords, they have to hack two different sites that both have your plaintext passwords. Then they have to work out how you have altered the passwords (which probably uses a method you made up that can be totally different from the example), and then they have to work out the method you use to generate the salt (which can be way harder than the method I suggested). The salt can also be much longer.
This is not secure: if two of the passwords are recovered, the rest can be guessed. Salts should be random.
Saying it is not practical without giving a reason doesn't make any sense to me.
This is not practical. Best passwords are long passphrases.
All passwords are made out of "0"s and "1"s - didn't you know computers store information in binary? So of course a password made out of "0"s and "1"s can be as secure as any other password in existence - as long as you have enough "0"s and "1"s.
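The arithmetic backs this up: a random password's strength is its length times log2 of the alphabet size, so a long enough binary password matches any short "complex" one. A quick check (the lengths chosen are just examples):

    import math

    def entropy_bits(length: int, alphabet_size: int) -> float:
        # A uniformly random password has length * log2(alphabet) bits of entropy.
        return length * math.log2(alphabet_size)

    print(entropy_bits(8, 95))   # ~52.6 bits for 8 random printable ASCII characters
    print(entropy_bits(53, 2))   # ~53 bits for 53 random "0"/"1" characters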
The point I was trying to make is that a bad password (such as a dictionary word) plus leeting is far worse than a random-character password. Leeting makes the user, and the password-checking algorithm on the site, think they are using a secure password, when in fact they have a really, really bad password. When I have seen leeting used, it is usually applied to a word that is a very bad password.
Leeting adds significant iterations to dictionary attacks. Unfortunately, people pay big money for cloud computing just to crack hashes, so while a home PC would take many times longer to precompute leeted passwords, there are existing private tables that already include them.
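It is easy to see how cheap those extra iterations are: the substitution table is small, so the attacker's rule engine only multiplies the dictionary by a modest factor. A rough sketch (the substitution table is a made-up subset of what real cracking rule sets use):

    from itertools import product

    # A few common leet substitutions; real cracking rule sets are larger.
    LEET = {"a": "a@4", "e": "e3", "i": "i!1", "l": "l1",
            "o": "o0", "s": "s$5", "t": "t7"}

    def leet_variants(word: str):
        # Yield every combination of substitutions a rule engine would try.
        pools = [LEET.get(c.lower(), c.lower()) for c in word]
        for combo in product(*pools):
            yield "".join(combo)

    print(sum(1 for _ in leet_variants("william")))
    # 108 variants: only ~7 extra bits of work on top of one dictionary word.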
Can someone please explain to me why, in the 21st century, we still need "secure" passwords?
It is surely not rocket science for systems to keep password data internally secure, and have time lockouts that prevent any sort of brute-force attack?
Am I just being naive - surely system designers can't all be this incompetent? Or are they just hampered by long-standing insecurities embedded in older systems that are just too hard to change?
How hard can it be...?

Security is very well understood by the security experts, but the trouble is users always want it easier. They want something else, like your suggestion that the computer should just do the authentication for you using stored information.
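On the lockout point, the throttling itself really is simple; a toy sketch (the policy numbers are invented for illustration) might look like this. The catch, as the replies explain, is that lockouts only stop online guessing; they do nothing once the hashed password database itself leaks and is attacked offline.

    import time

    MAX_ATTEMPTS = 5          # hypothetical policy: 5 tries,
    LOCKOUT_SECONDS = 300     # then a 5-minute lockout

    failed = {}  # username -> (failure count, time of last failure)

    def allow_attempt(username: str) -> bool:
        count, last = failed.get(username, (0, 0.0))
        if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
            return False      # locked out; online brute force is throttled
        return True

    def record_failure(username: str) -> None:
        count, _ = failed.get(username, (0, 0.0))
        failed[username] = (count + 1, time.time())

    def record_success(username: str) -> None:
        failed.pop(username, None)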
For one, the experts know that, for security, you cannot have a secured password kept somehow safe in your computer, as that is exactly what the hackers want to see. If you secure the passwords with another password stored in the computer, that is a big vulnerability.

Plenty of security experts (e.g. Bruce Schneier) recommend password managers. This is the only realistic solution if you expect users to pick unique, secure passwords for the various websites they visit.
You may think "I want to protect against an external brute-force attack, but I don't need to protect against someone who gets inside my computer." That is nice, but it is not true security. The moment a security expert is forced to start making arbitrary compromises like that, the security usually starts collapsing into a big insecure mess. True security is where you trust no one except yourself, and that includes all the other companies who write software for your computer, the company that supplies the operating system - basically everyone else.

You might be referring to the "Reflections on Trusting Trust" presentation by Ken Thompson. The gist of this talk was that an attack like this would be impossible to detect, not that it's a realistic assumption to expect a system to defend against it. If the OS, compiler or hardware is rigged, you've lost. The hardware or OS might contain all kinds of advanced monitoring and phoning-home capabilities. There is not much you can do about it unless you build everything yourself, starting from a pile of sand (until they invent a way to introduce back doors in sand crystals).
But you still need a password to log in to the password manager, and that login can be made secure with multiple-factor authentication.
You can defend against all of the above. The solutions are all known technology. The problem occurs when people who do not understand security start interfering with the security experts on the basis of the sort of assumptions you have just made.
How would you defend against the compiler in Ken Thompson's example? I believe there was one theoretical paper published a few years ago which proposed writing a very simple compiler from scratch, but this is hardly a practical solution in most cases.
If no one machine ever had all the information needed to break the login, then even with all the problems you mention above, the login cannot be stolen.

Maybe not the login, if you use something like challenge-response authentication (do you trust the security token manufacturer?). But whatever data you're trying to protect should be considered compromised. If the provided login authentication serves to decrypt certain information (e.g. a database with credit card numbers), then the compromised hardware/software can grab this information when it's displayed or transmitted to another system. The information has to exist in unencrypted form at some point. If it just protects against access (e.g. standard Windows login passwords), then it's trivial for compromised hardware/software to bypass this check.
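For reference, a bare-bones version of the challenge-response idea mentioned above (the function names and key handling are simplified for illustration): the server sends a random nonce and the client proves it knows the shared key without ever transmitting it.

    import hashlib, hmac, os

    def make_challenge() -> bytes:
        # Server picks a fresh random nonce for every login attempt,
        # so a captured response cannot be replayed.
        return os.urandom(16)

    def client_response(shared_key: bytes, challenge: bytes) -> bytes:
        # Client proves knowledge of the key without sending it.
        return hmac.new(shared_key, challenge, hashlib.sha256).digest()

    def server_verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    key = os.urandom(32)          # shared secret, provisioned out of band
    nonce = make_challenge()
    assert server_verify(key, nonce, client_response(key, nonce))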
I do not know Ken Thompson's work, so I do not know the scenarios he was working with. Modern software practice means that the software will include a comprehensive testing suite to check the internal operation of all the routines, and this would include full tests to make sure that the encryption process is producing the correct results. For a compiler to fool the programmer, it would have to be targeted to run correctly for the testing suite, and then run differently for the final operational code. This to me sounds like a very targeted attack on a new compilation of a known, existing program. It could happen for a commercial program where everything may be compiled on one computer. It would be very hard to accomplish in open source, where the programs are compiled with different compiler versions on different platforms under different operating systems. One compromised system just would not work properly with other non-compromised systems using the correct encryption with the correctly secured private keys.
Mathematically it is all the same though, no? If it's a secure password on one site, surely it is also a secure one at another (assuming the same minimum length and allowable characters).

The point of unique passwords is to limit the damage of a leaked password. Imagine the hypothetical scenario where your hashed LinkedIn password was published and some idiot at LinkedIn failed to implement proper password hashing. An attacker may be able to figure out many of the passwords used by LinkedIn users. They could also have the e-mail addresses belonging to these passwords. They can now use this same password to get access to your Gmail or Hotmail account (just search for addresses ending in gmail.com/hotmail.com), which provides them with access to many other sites through 'recover your password' procedures. They could also use the same password and e-mail address to log into PayPal and perform transactions on your behalf. Hence the advice to LinkedIn users to change their password on LinkedIn and any other site where they might have used the same password. Having different passwords for all these sites would limit the damage to just your LinkedIn account.
If people understood where the strength of passwords comes from, they would demand unlimited-length passwords and the ability to use all Unicode characters. Then all they would need is one pass-string.
I believe the premise was that the compiler would detect when it was compiling the login program, and insert code similar to:
if (strcmp(username, "root") == 0 && strcmp(password, "toor") == 0) {
    /* backdoor: grant root access for the hard-wired password */
    uid = 0;
    grant_access();
}
Of course it could do something much more intricate than a backdoor password, like something requiring exact timing. Short of inspecting the disassembly, I don't see how any automated test suite would find this. Spreading it would be an issue, although even most GCC users get their compiler from one of the popular Linux distros (which have dedicated build farms) or from Cygwin. It shows that life would be very hard if you're unable to trust your compiler (and, by extension, everything the compiler depends on). One might also imagine a system-management-mode process (invisible to the OS) embedded in firmware that would detect GCC running and monkey-patch it to do something similar.