Saturday, December 29, 2007

AV Signature False Positives

Kaspersky's AV accidentally identified the Windows Explorer process as malware. The same thing happened to Symantec with their Asian-language Windows customers. And Heise is running an article on how AV vendors' ability to protect has decreased since last year.


The problem with these commercial, signature-based, anti-malware solutions is that they work 1) backwards and 2) blind. They operate "backwards" in the sense that they are a default-allow (instead of default-deny) mechanism-- they only block the stuff they know all of their customers will agree is bad (unless they screw up like this). And they operate "blind" in that they don't do any QA on their code in your environment. If you think about it, it's scary: they apply multiple changes (potentially crippling ones, as these recent events show) to production systems, in most organizations several times per day, without proper change control processes. Besides anti-malware, what other enterprise applications operate in such a six-shooters-blazing, wild west cowboy sort of way?


Surely this is one more nail in the signature-based anti-malware coffin.

Tuesday, December 11, 2007

OpenDNS - I think I like you

I think I really like OpenDNS. It's intelligent. It's closer to the problem than existing solutions. And it's free.


OpenDNS works by using Anycast to route you to the nearest of its DNS servers based on where you are. But before it quickly hands back your response, it can optionally filter out unwanted content. OpenDNS partners with communities and service providers to maintain a database of adult content and malicious websites. If you choose to opt in, each DNS query that matches a known bad site redirects your browser to a customizable page that explains why the page is not allowed.
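To make the mechanism concrete, here is a minimal sketch of what a lookup through OpenDNS looks like from a client's point of view. It assumes the third-party dnspython package (any DNS library, or plain nslookup/dig, would do just as well):

    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["208.67.222.222", "208.67.220.220"]   # the OpenDNS resolvers

    # An ordinary lookup: OpenDNS answers like any other recursive resolver.
    for record in resolver.resolve("www.example.com", "A"):
        print("resolved to", record)

    # Had the name matched a category you opted to filter, the answer would
    # instead point at an OpenDNS block page explaining why it was denied.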

Now, privacy advocates are well aware that there is a potential data collection and use problem. However, DNS queries are already a privacy risk, since an ISP can build quite a profile of you based on which names get resolved to numbers. OpenDNS can collect information about you, including statistics associated with DNS usage on the networks you manage, but that collection is not turned on by default-- you have to opt into it as well. So, all things considered, privacy is well managed.

I really like this approach to filtering unwanted HTTP content because it completely prevents any connection between clients and offending servers. In fact, clients don't even get to know who (if you can allow me to personify servers for a moment with the term "who") the server is or where it lives. But what I like even more is that this service is simple. There are no complicated client software installs (that users or children can figure out how to disable), no distributed copies of offending URL databases to replicate and synchronize, and no lexicons for users to tweak. It's lightweight. All it takes is updating a DHCP server's entries for DNS servers to point to 208.67.222.222 and 208.67.220.220 and checking a few boxes in an intuitive web administration console for which content should be filtered. For a home user, that's as easy as updating the DNS server fields in a home router-- and all current and future clients are ready to go. An enterprise could use this service as its DNS forwarders as well. And many larger customers do. A non-tech-savvy parent could turn on content filtering without the "my kids program the VCR" syndrome resulting in the kids bypassing the filters. Setting an IP address for a DNS server doesn't stand out as a "net nanny" feature to kids who are left alone with the computer.

Use OpenDNS
Okay, there have to be caveats, right? Here they are ...

If you're planning on using some third-party DNS service--especially one that is free--it had better perform well, and it had better be a service that you trust (because DNS has been abused in the past to send people to malicious sites). Since their inception in July 2006, OpenDNS has served over 500 million DNS requests with a 100% uptime track record. And their open, collaborative stance on issues like phishing (see phishtank.com) gives you good reason to trust them.

Any DNS misses (except some common typos) will land you on an OpenDNS web page that tries to "help" you find what you missed. The results look like re-branded Google results. Users clicking links on that results page is how OpenDNS makes its revenue--on a pay-per-click basis. That's how they keep the service free.

Dynamic IP addresses can mess up a home user's ability to keep content filtering policies in place (but this won't affect enterprises). There are a number of ways to keep the policies in sync, though, including their DNS-O-Matic service. What I'd like to see added on: native consumer router support for dynamic IP address changes, so that content filtering policies stay in place no matter what the ISP does. [The Linksys WRT54G wireless router, for example, supports similar functions with TZO and DynDNS today-- it would be nice if OpenDNS were another choice in the drop-down menu.] If my neighbor enrolled in the service, it might be possible for me to get my neighbor's OpenDNS filtering policies if we share the same ISP and dynamic IP pool, but again, that's what the dynamic IP updating services are for.

Enterprises that decide to use OpenDNS for their primary outgoing DNS resolvers must keep in mind that an offending internal user could simply specify a DNS server of their preference-- one that will let them bypass the content filters. However, a quick and simple firewall policy (not some complicated DMZ rule) that blocks all outbound DNS traffic (UDP/TCP 53) except traffic destined for the OpenDNS servers (208.67.222.222 and 208.67.220.220) will quell that concern.
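If you want to sanity-check that egress rule once it's in place, a rough sketch like the following works, again assuming dnspython; the 4.2.2.2 address below is just an arbitrary third-party resolver chosen for the test:

    import dns.exception
    import dns.resolver

    def can_resolve_via(nameserver, name="www.example.com"):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        resolver.lifetime = 3   # give up after a few seconds
        try:
            resolver.resolve(name, "A")
            return True
        except dns.exception.DNSException:
            return False

    print("OpenDNS reachable:", can_resolve_via("208.67.222.222"))
    print("other resolver blocked:", not can_resolve_via("4.2.2.2"))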

So the caveats really are not bad at all.

Since the company is a West Coast (SF) startup, and since the future seems bright for them as long as they can keep their revenue stream flowing, I imagine they'll be gobbled up by some larger fish [Google?].


So this Christmas, give the gift of safe browsing.




...
This might seem like a blatant advertisement, but (number one) I rarely like a service well enough to advocate or recommend it and (number two) I am not financially affiliated with OpenDNS in any way.

Monday, December 10, 2007

Gary McGraw on Application Layer Firewalls & PCI

This serves as a good follow-up to my dissection of Imperva's Application Layer Firewall vs Code Review whitepaper.

Gary McGraw, the CTO of software security firm Cigital, just published an article on Dark Reading called "Beyond the PCI Bandaid". Some tidbits from his article:

Web application firewalls do their job by watching port 80 traffic as it interacts at the application layer using deep packet inspection. Security vendors hyperbolically claim that application firewalls completely solve the software security problem by blocking application-level attacks caused by bad software, but that’s just silly. Sure, application firewalls can stop easy-to-spot attacks like SQL injection or cross-site scripting as they whiz by on port 80, but they do so using simplistic matching algorithms that look for known attack patterns and anomalous input. They do nothing to fix the bad software that causes the vulnerability in the first place.
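To illustrate the kind of "simplistic matching" he's describing, here's a toy example-- a contrived regex signature (not any product's actual rule) that catches the textbook SQL injection string but misses a trivially URL-encoded variant of the same attack:

    import re

    # A contrived signature in the spirit of pattern-matching WAF rules.
    SQLI_SIGNATURE = re.compile(r"('|%27)\s*or\s*1=1", re.IGNORECASE)

    textbook_attack = "username=' OR 1=1 --"
    encoded_attack = "username=%27%20oR%201%3D1%20--"   # same attack, URL-encoded

    print(bool(SQLI_SIGNATURE.search(textbook_attack)))   # True: caught
    print(bool(SQLI_SIGNATURE.search(encoded_attack)))    # False: slips right past

And, as McGraw points out, even a rule that caught both strings would do nothing to fix the vulnerable code sitting behind the firewall.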
Gary's got an excellent reputation for fighting information security problems from the software development perspective. His Silver Bullet podcast series is one of a kind, interviewing everyone from Peter Neumann (one of the founding fathers of computer security) to Bruce Schneier (the best known of the gurus) to Ed Felten (of Freedom to Tinker and Princeton University fame). He is also the author of several very well respected software security books.

Thursday, December 6, 2007

Salting your Hash with URLs

So, when I was reading this post on Light Blue Touchpaper (the Cambridge University Computer Security Lab's blog) a few weeks back, I, like many others (including this Slashdot thread), was reminded of the importance of salting your password hashes ... As it turns out, you really can ask Google for a hash value and it really will return significant results-- like a gigantic, easy-to-use rainbow table. Steven Murdoch managed to find "Anthony" with this simple search.
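A minimal sketch of the difference, in Python (MD5 only because that's the hash the blog post searched for; the random per-user salt shown here is one reasonable approach, not the only one):

    import hashlib
    import os

    password = "Anthony"

    # Unsalted: every user with this password gets the same digest, so one web
    # search (or rainbow table lookup) reverses it for everyone at once.
    print("unsalted:", hashlib.md5(password.encode()).hexdigest())

    # Salted: a random per-user salt is stored alongside the hash. The digest is
    # now unique to this user, so precomputed tables (and Google) are useless.
    salt = os.urandom(16)
    print("salted:  ", hashlib.md5(salt + password.encode()).hexdigest())
    print("salt:    ", salt.hex())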

Of course, if my story stopped there it would not be that interesting. Security professionals have known for a long time about salting hashes to defeat precomputed tables of known hash values. But as I thought about salting hashes, the parallels with some password management tools dawned on me.

I had been wanting to reduce the number of passwords I have to keep track of for all of the various web applications and forums that require them. I have used Schneier's Password Safe for years now and find it nice, but it's not portable to other platforms (e.g. Mac/Linux). Even moving between different PCs is difficult because it requires keeping my Password Safe database in sync. Of course, several browsers have the ability to store passwords behind a "master password", but I have several objections to them. First, they are part of the browser's stack, so I have to trust that a client-side bug in my browser or the web applications I use won't give malware an opportunity to steal my passwords. Second, they don't tend to work well when moving from machine to machine, so there's a synchronization problem there, too. So, I am always on the lookout for a good alternative. Perhaps one day an authentication system like Info Cards will become a reality in the consumer space ...

So, when I first stumbled upon Password Maker, an open source password management tool, I wanted to like it. How does it work? From their description:
You provide PASSWORDMAKER two pieces of information: a "master password" -- that one, single password you like -- and the URL of the website requiring a password. Through the magic of one-way hash algorithms, PASSWORDMAKER calculates a message digest, also known as a digital fingerprint, which can be used as your password for the website. Although one-way hash algorithms have a number of interesting characteristics, the one capitalized by PASSWORDMAKER is that the resulting fingerprint (password) does "not reveal anything about the input that was used to generate it." 1 In other words, if someone has one or more of your generated passwords, it is computationally infeasible for him to derive your master password or to calculate your other passwords. Computationally infeasible means even computers like this won't help!
But, as I said: "I wanted to like it." After all, nothing ever has to be stored-- you just have to remember the master password and the algorithm does the rest. Nothing has to be installed directly into the browser (unless you want to), not to mention it's very portable from platform to platform. And since there is no password database or safe to move around, there's no synchronization problem-- the site-specific passwords are re-created on the fly. It sounds like a panacea. In the algorithm, the URL essentially becomes a salt, concatenated with the master password as the hash input. The resulting hash [sha1 (master password + URL)] is the new site-specific password. It sounded like a great solution, but I have a couple of potentially show-stopping concerns about it.
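Boiled down to code, the idea looks something like this-- a simplified sketch of my reading of the scheme, not PasswordMaker's exact algorithm (the real tool supports multiple hash functions, character sets, and output lengths):

    import hashlib

    def derive_site_password(master_password, url, length=12):
        # The URL acts as the salt; only the master password has to be remembered.
        digest = hashlib.sha1((master_password + url).encode()).hexdigest()
        # The real tool maps the digest into a chosen character set and length;
        # truncating the hex digest is the simplest possible stand-in for that.
        return digest[:length]

    print(derive_site_password("my master password", "paypal.com"))
    print(derive_site_password("my master password", "myspace.com"))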
  1. There are varying opinions, but it is prudent to keep salt values secret. If the salt value becomes known, a rainbow table could be constructed over the password key-space concatenated with the salt. Granted, it might take somebody several days to create the rainbow tables, but it could be done-- especially if there were economic incentives to do so. Imagine that an adversary targets the intersection of MySpace and PayPal users, also assuming (of course) that Password Maker is at least somewhat popular. Sprinkle in some phishing, XSS, or whatever is needed today to capture some MySpace passwords (which are of considerably lower value than, say, PayPal passwords) to compare against the rainbow tables, and ... voila ... the adversary now has the master password to feed into Password Maker's scheme and get access to PayPal. (A rough sketch of this attack appears after this list.)
  2. I am not a mathematical/theoretical cryptographer, but I know better than to take the mathematics in cryptographic hash functions for granted. There has not been much research into hashing hash values, at least not much that has entered the mainstream. As such, it may be possible to create mathematical shortcuts or trapdoors when hashing hash values, at least with certain types of input. That is not to be taken lightly. I would not build any critical security system on top of such a mechanism until there was extensive peer-reviewed literature proclaiming it a safe practice (also read the section entitled "Amateur Cryptographer" in this Computer World article).
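To make concern #1 concrete, here is a rough sketch of the offline attack, with a tiny made-up dictionary standing in for the precomputed rainbow table (all of the passwords here are illustrative):

    import hashlib

    def derive_site_password(master_password, url, length=12):
        return hashlib.sha1((master_password + url).encode()).hexdigest()[:length]

    # A site password phished from a low-value site, derived from a weak master password.
    captured = derive_site_password("letmein", "myspace.com")

    # The attacker grinds through candidate master passwords offline; the "salt"
    # (the URL) is public, so tables for popular sites could even be precomputed.
    for guess in ["password", "123456", "letmein", "monkey"]:
        if derive_site_password(guess, "myspace.com") == captured:
            print("master password recovered:", guess)
            print("derived PayPal password:  ", derive_site_password(guess, "paypal.com"))
            break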
In summary, the Password Maker is an intriguing idea, perhaps even novel, but I wouldn't use it for passwords that get you access to anything of high value. For mundane sites (ones where a compromised password is not a huge deal), it's probably a decent way to manage passwords, keeping separate passwords for each site.

Tuesday, December 4, 2007

Client Software Update Mechanisms

It's 2007. Even the SANS Top 20 list has client-side applications as a top priority. Simply put, organizations have figured out how to patch their Microsoft products using one of the myriad automated tools out there. Now it's all the apps that are in the browser stack in some way or another that are getting the attention ... and the patches.

Also, since it's 2007, it's well-agreed that operating a computer without administrative privileges significantly reduces risk-- although it doesn't eliminate it.

So why is it that when all of these apps in the browser stack (Adobe Acrobat Reader, Flash, RealPlayer, QuickTime, etc.) implement automated patch/update mechanisms, those mechanisms are completely broken if you follow the principle of least privilege and operate your computer as a non-admin? Even Firefox's built-in update mechanism operates exactly the same way.

So, here are your options ....

1) Give up on non-admin and operate your computer with administrative privileges, under the justification that the patches reduce more risk than decreased privileges do.

2) Give up on patching these add-on applications, under the justification that decreased privileges reduce more risk than patching the browser stack does.

3) Grant write permissions to the folders (or registry keys) that belong to the applications that need updates so that users can operate the automated update mechanisms without error dialogs, understanding that this could lead to malicious code replacing part or all of the binaries to which the non-admin users now have access.

4) Lobby the vendors to create a trusted update service that runs with privileges, preferably with enterprise controls, such that the service downloads and performs integrity checking upon the needed updates, notifying the user of the progress.

Neither option 1 nor option 2 is ideal. Both are compromises, and the success of each depends heavily upon an ever-changing threat landscape. Option 3 might work for a while, particularly while it remains an obscure option, but it's very risky. And option 4 is long overdue (a sketch of what it could look like follows below). Read this, Firefox, Apple, Adobe, et al.: create better software update mechanisms. Apple even created a separate Windows application for this purpose, but it runs with the logged-in user's permissions, so it's useless.
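For what it's worth, the heart of option 4 is not complicated. Here is a hand-wavy sketch of the download-and-verify step such a privileged service would perform; the URL, digest, and installer hand-off are hypothetical placeholders, not any vendor's real update channel:

    import hashlib
    import urllib.request

    UPDATE_URL = "https://updates.example.com/reader-8.1.2.msi"   # hypothetical
    EXPECTED_SHA256 = "0123abcd..."   # digest published by the vendor (placeholder)

    def fetch_and_verify(url, expected_sha256):
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            raise RuntimeError("digest mismatch -- refusing to install")
        return data

    package = fetch_and_verify(UPDATE_URL, EXPECTED_SHA256)
    # A real service, running with its own (not the user's) privileges, would now
    # hand the verified package to the installer and report progress to the user.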


...
And this doesn't even touch all of the patching problems large organizations learned about while introducing automated patching systems for Microsoft products: components used in business-critical applications must be tested prior to deployment. The self-update functions in the apps described above have zero manageability for enterprises. Most of these products ship new versions as complete installers instead of releasing updates that patch only the broken components. The only real option for enterprises is to stay aware of new versions as the vendors release them and package the installers for enterprise-wide distribution through their favorite tool (e.g. SMS). It would be nice if these vendors released a simple enterprise proxy, at least on a similar level to Microsoft's WSUS, where updates could be authorized by a centralized enterprise source after proper validation testing in the enterprise's environment.