Savage’s Theorem

April 14, 2008

I don’t remember where or when, now, but a few years ago I came across a piece of advice from a respected security expert that ran something like this: “If you treat your users like criminals, they will invariably prove you right.” Even though I don’t remember who said it or where (I think it may have been an article somewhere by Rob Flickenger), it’s stuck with me, because there are several important ideas packed into it.

The first has to do with administrator mindset: The network exists for the users, and you should be protecting it for them, not from them. If your users are in your threat model, the problem is probably you, not them (of course, we’re talking about sysadmins, not webmasters of public sites, here). If you’re suspicious and go looking for trouble, though, you’ll probably find it. We’ve all worked with admins like that–and at some point, some of us have probably fallen into the trap of being admins like that–so most of us can recognize why that attitude isn’t productive.

The second idea is potentially transformative: policy and attitude influence user behaviors as much as they respond to them. Part of it has to do with the path of least resistance. If the policy makes it difficult for regular users to do their jobs because of fear that some users will abuse their privileges, then even normal users will start looking for ways to circumvent the system. This is why the RIAA approach to copyright fails so miserably. But part of it also has to do with fostering a general spirit of trust, and with the way technocultural knowledge is disseminated. Users look to policy to establish norms. If the policy implies that most users are devious hackers attempting to subvert the system to their own uses, then that is what users will assume they should be. If, on the other hand, the cues point toward a norm of responsible use, the majority of users will pick up on that, too.

This is why CYA is a horrible guiding principle for any organization, and why one of the worst things policy makers can do is write policy for corner cases. There will always be bad apples, but write the policy for the general case–for how to use the system, not for how not to use the system–and deal with the exceptions as exceptions.

This insight, of course, has a much wider application than computing systems. It applies in almost any social setting. It is closely related, for instance, to the problems we see throughout the academy with “helicopter parents” and the resurrection of in loco parentis on campus: if you treat students like they’re not adults, they’ll never start to act like adults.

We talk about people “rising to the challenge,” but we never stop to realize that the reverse is also true. Thus, Savage’s Theorem:

People will generally meet your expectations of them.


Identity and hypocrisy

September 20, 2007

I realized today that I’m a hypocrite.

On the one hand, I’m a big proponent of OpenID. I think that tying identity to individuals, rather than services, makes sense and is the only sensible way to handle identity management on the internet.

That doesn’t mesh well, though, with my general security policy and open derision for people who use the same password for everything. OpenID is essentially using the same password for everything, or at least it has the same single-point-of-failure security model. I guess I’ll have to lay off the single-passworders.

People who use short, all-lower-case, dictionary words are still firmly in my sights, though.

Constitution 2.0

April 2, 2007

Ed Foster should leave InfoWorld and do political satire full time:


You, the people of the United States of America (herein referred to as “You”), in order to form a more perfect union with your Government (herein referred to as “Government” or “We”), do agree to be bound by the terms of this Constitution. If you do not agree to the terms of this Constitution, do not use any Government services, including but not limited to justice, domestic tranquility, the common defense, the general welfare, the blessings of liberty for you and your posterity, and/or residence in the United States of America.

Article I. You agree that all legislative, executive, judicial, and other powers shall be vested in the Government. The times, places, and manner of selecting Government officials will be determined by the Government. We may at any time make or alter such regulations, rules, or laws and shall have the power to appoint or remove officials as We deem appropriate.

Article II. You agree that your access to services We provide may be terminated immediately without notice on the sole and absolute discretion of the Government if you fail to comply with any term or provision of this Constitution. Upon termination, you must immediately cease to make use of all Government services including but not limited to life, liberty, and the pursuit of happiness. You agree that treason, copyright infringement, and other high crimes and misdemeanors shall be punishable by death, bill of attainder for corruption of blood, and other penalties as the Government may direct. […]

What’s great about Ed’s characterization, here, is that it points up exactly how it is that we let this happen. I know a lot of commentators drag out that tired old saw from Franklin about people who sacrifice liberty for temporary security deserving neither, and I don’t entirely disagree. But I think Foster’s closer to the truth, here. It’s not about being scared, or about genuinely believing that the current government’s policies foster security. I think even the most vocal elements of the Right have finally admitted that this administration’s policies have systematically made Americans less secure, both at home and abroad. The real problem is that we’ve become so used to signing away our rights without even reading the fine print that it’s ceased to bother us.


February 6, 2007

Ryan over at 27b/6 has an article up today about SiteKey, and the fact that it doesn’t really do anything. Actually, it’s a link to an NYT piece where one of the researchers concludes that “Sometimes the appearance of security is more important than security itself,” and the reason Bank of America was willing to pay so much for SiteKey is RSA’s data showing that it overwhelmingly makes customers feel more secure…despite the fact that 58 out of 60 of them ignore it entirely. If you don’t know what SiteKey is, here’s the quick version: when you set up your account, you choose from a bewildering selection of pre-chosen icons that have nothing to do with you and no significance for you personally. Then when you log into the site, the picture gets shown next to the password box. If you see your picture, you go ahead and enter your password. If you see a different picture, something is wrong. The picture storage and retrieval is supposedly implemented in a way that makes it difficult to fake.
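In other words, the flow looks roughly like this (a minimal sketch; the usernames, image names, and function names are all invented for illustration, not BoA’s actual implementation):

```javascript
// Hypothetical sketch of a SiteKey-style login flow. The image map and
// all names here are invented for illustration.
const chosenImages = new Map([
  ["alice", "sailboat.png"],
  ["bob", "sunset.png"],
]);

// Step 1: the user submits only a username; the server looks up that
// user's chosen image and displays it next to the password box.
function lookupSiteKeyImage(username) {
  return chosenImages.get(username) || null;
}

// Step 2: the user is supposed to verify the image before typing a
// password; a mismatch means something is wrong (e.g., a phishing page).
function okToEnterPassword(username, imageShown) {
  return lookupSiteKeyImage(username) === imageShown;
}
```

The whole scheme hinges on step 2 actually happening in the user’s head, and that’s exactly the step the study’s users skipped.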

I have a BoA account (unfortunately my credit card company was purchased by them last year), and I can tell you that SiteKey is completely useless. It’s not surprising to me that 58 out of 60 people got it wrong. It’s surprising that 2 out of 60 didn’t. The problem is that it’s difficult to know whether the picture shown is my picture or not. I log on once a month to pay my bill, and that’s it. I don’t spend the other 29 (or 30 or 27 or 28) days of the month constantly reminding myself which 50×50 icon I chose for my account. So when, a month later, I sign in again, the best I can do is say “well, it looks like something I would have chosen.” But then again, 99% of the pictures look like something I might have chosen. None of the pictures are disturbing, nauseating, or even macabre. They’re uniformly pretty and unexceptional. Sailboats? Sure. Venice? Why not. A sunset? Who wouldn’t? I’m pretty sure it’s not the teddy bear, but beyond that I couldn’t tell you if the picture next to my name is the one I picked, just another one of the seemingly hundreds I had to flip through, or one that isn’t even a BoA picture at all.

This, as Ryan says, is why phishing attacks work.

More disturbing is the information from the study that 100% of the study participants logged in even when sent to a non-SSL page. Again, that doesn’t surprise me. Users are faced with a bewildering array of visual cues about a site’s security, and an even more bewildering preponderance of sites that don’t properly support major browsers. If everyone refused to continue unless there was a little yellow padlock on the screen, no one would ever get anything done. And that’s assuming that they know what the little yellow lock means, and can find it without their bifocals.

What this really points up is the major shortcoming of nearly all current security models: they’re optional, and they rely on the end user–almost certainly the least knowledgeable party to the transaction–to ensure the security of the entire transaction. And it’s going to get a lot worse before it gets better. AJAX and our other “Web 2.0” technologies are directed at one goal: transferring information seamlessly. One of the key features of AJAX/DHTML is the ability to update pages without refreshes and transfer information to servers without requiring a “submit” click. The hidden cost there is that XMLHttpRequests don’t just circumvent CGI form actions, they circumvent the “insecure submission” warnings of browsers.
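To make that concrete, here’s a minimal sketch (the endpoint URL and field names are invented) of a page script shipping form data in the background. Because no form submission or page navigation ever occurs, the browser’s “insecure submission” dialog never gets a chance to fire:

```javascript
// Hypothetical sketch: posting form data via XMLHttpRequest instead of a
// <form> submit. The URL and fields are invented for illustration.
function encodeForm(fields) {
  // Serialize fields as application/x-www-form-urlencoded.
  return Object.entries(fields)
    .map(([k, v]) => encodeURIComponent(k) + "=" + encodeURIComponent(v))
    .join("&");
}

function postInBackground(url, fields) {
  const body = encodeForm(fields);
  // Guard so the sketch is also runnable outside a browser.
  if (typeof XMLHttpRequest !== "undefined") {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", url); // no warning, even for a plain http:// url
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send(body);
  }
  return body;
}
```

From the user’s point of view, nothing happened at all–which is precisely the problem.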

The solution here is pretty clear to me: abandon clear HTTP as a protocol. Modern server and client hardware could encrypt all, or at least most, traffic via SSL. Security would be the norm, not an anomaly, and more people might pay attention to the security warnings if they were out of the ordinary.