Perhaps you’ve noticed I haven’t posted here in quite a while. I’m writing all new posts over on the HoneyApps blog, and I’ll be archiving my existing content there as well. Please be sure to adjust your RSS feed readers if you haven’t already!
This post was originally posted on CSO Online here.
If you are working in an organization with any sizable technology infrastructure, it has probably become quite apparent that your vulnerability management program has a lot more “vulnerabilities” than “management”. I recently had an email exchange with Gene Kim, CTO at Tripwire, about this issue, and he boiled it down better than anyone I had heard before. To quote Gene, “The rate of vulnerabilities that need to be fixed greatly exceeds any IT organization’s ability to deploy the fixes”. Continuing the conversation, he expanded on his comment:
“The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete it, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.
Solving the vulnerability management problem must reduce the number of calories required to mitigate the risks — this is not a patch management problem. Instead it requires a way to figure out which risks actually matter, and introducing mitigations that don’t jeopardize every other project commitment the IT organization has, or jeopardize uptime and availability.”
Having lived through this vulnerability arms race myself, this statement really rang true. Gene and I clearly have a mutual passion for solving this issue. In fact, I would extend Gene’s sentiment to the development side of the house where application security vulnerabilities are piling up in the industry. Great, so now what?
My first reaction to this, being a bit of an admitted data junkie, was to start pulling some sources to see if my feeling of being buried was supported and accurate. This post is an oversimplified approach, but works for confirming a general direction.
Let’s go to the data!
First, what types of vulnerability information are security teams generally dealing with? I categorized them into the following buckets: Custom Applications, Off The Shelf Applications, Network and Host, and Database. A couple of very large data sources for three of the four categories can be found via the National Vulnerability Database as well as the Open Source Vulnerability Database. Additionally during some research I happened upon cvedetails.com. To simplify further we’ll take a look at OSVDB, which has some overlap with the NVD.
Looking at the first four months of 2010, we can see OSVDB is averaging over 600 new vulnerabilities per month. That’s great, but on average how many of these new vulnerabilities affect a platform in my environment? The cvedetails.com site has a handy list of the top 50 vendors by distinct vulnerabilities. Viewing the list, it’s a fairly safe assumption that most medium and large businesses run products from 70% or more of the top 20 vendors (Note: one of several sweeping assumptions, plug in your own values here). This ends up being quite a large number even while ignoring the long tail of vulnerabilities that may exist within your organization.
One category of vulnerabilities not covered by these sources is custom web applications. These are unique to each company and must be addressed separately. To get a general sense of direction, I turned to the 2008 Web Application Security Statistics project from WASC. According to the report, “The statistics includes data about 12186 web applications with 97554 detected vulnerabilities of different risk levels”. That works out to about 8 unique vulnerabilities per application. My experience tells me the actual number varies GREATLY based on the size and complexity of the application. One piece of information not included in the report is the number of participating companies, which would give us a better idea of how many applications each company was managing.

For that, we can use the recent WhiteHat Statistics Report, “Which Web programming languages are most secure?” (registration required). While the report focused on vulnerabilities in sites written in different programming languages, they were kind enough to include the following: “1,659 total websites. (Over 300 organizations, generally considered security early adopters, and serious about website security.)”. That’s about 5.5 web sites per organization. But wait a minute; if we ignore the platform breakdown and just divide the total number of vulnerabilities by the total number of sites, these organizations are averaging over 14 vulnerabilities per site. Of course, this is averaged over a four-plus-year period, so we need to look at resolution rates to understand what a team is dealing with at any point in time. According to the report, resolution times range drastically, from just over a week to several months, and some vulnerabilities simply remain unfixed. An organization’s development and build process will influence not only the rate of resolution but also the rate of introduction.
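A quick back-of-the-envelope check of the averages quoted above, using only the figures from the WASC and WhiteHat reports (the WhiteHat organization count is given only as “over 300”, so the sites-per-organization figure is an upper bound):

```python
# Sanity-check the per-application and per-organization averages
# cited from the WASC and WhiteHat reports.

# WASC 2008 Web Application Security Statistics project
wasc_apps = 12186
wasc_vulns = 97554
print(f"WASC: {wasc_vulns / wasc_apps:.1f} vulnerabilities per application")  # ~8.0

# WhiteHat statistics report
whitehat_sites = 1659
whitehat_orgs = 300  # "over 300 organizations"; treat as a lower bound
print(f"WhiteHat: at most {whitehat_sites / whitehat_orgs:.1f} sites per organization")  # ~5.5
```

Nothing fancy, but it confirms the rough magnitudes the post relies on.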
Given the limited data sources covered here, it’s easy to assert the rate of introduction is still greater than the rate of resolution.
While I haven’t gone into enough detail to pull out exact numbers, I believe I satisfied my goal of confirming what my gut was already telling me. Obviously, I could go on and on pulling in more sources of data (I told you I’m a bit of a junkie), but for the sake of keeping this a blog post and not a novel, I must move on.
So how do we go about fixing this problem?
The vulnerability management problem is evolving. It is no longer difficult to identify vulnerabilities; like many areas within technology, we are now overwhelmed by data. Throwing more bodies at this issue isn’t a cost-effective option, nor will it win this arms race. We need to be smarter. Many security practitioners complain about not having enough data to make these decisions. I argue we will never have a complete set, but we already have enough to make smarter choices. By mining this data, we should be able to create a much better profile of our security risks. We should be combining this information with other sources to match up against the threats to our specific business or organization. Imagine combining your vulnerability data with information from the many available breach statistics reports. Use threat data and stats that are appropriate for your business to determine which vulnerabilities need to be addressed first.
Also, consider the number of ways a vulnerability can be addressed. To Gene’s point above, this isn’t a patch management problem. Mitigation comes in many forms, including patches, custom code fixes, configuration changes, disabling of services, and so on. Taking a holistic view of the data, including a threat-based approach, will result in a much more efficient remediation process while fixing the issues that matter most.
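A threat-weighted prioritization could be sketched roughly like this. Every identifier, score, and weight below is a made-up illustration, not data from any real feed; the point is simply that severity alone isn’t the ranking key once you fold in threat relevance for your business:

```python
# Toy sketch: rank vulnerabilities by severity weighted by threat relevance.
# All identifiers, scores, and weights below are hypothetical.

vulns = [
    # (identifier, base severity 0-10, threat category)
    ("VULN-A", 9.0, "remote-exploit"),
    ("VULN-B", 7.5, "local-privilege"),
    ("VULN-C", 5.0, "remote-exploit"),
]

# Hypothetical weights derived from breach reports / threat stats
# relevant to *your* organization.
threat_weight = {"remote-exploit": 1.0, "local-privilege": 0.4}

# Sort by severity scaled by how likely the threat category is to be
# exercised against this business.
ranked = sorted(vulns, key=lambda v: v[1] * threat_weight[v[2]], reverse=True)
for vid, sev, cat in ranked:
    print(vid, round(sev * threat_weight[cat], 2))
```

Note how VULN-C, despite a lower raw severity than VULN-B, outranks it once the threat weighting is applied; that reordering is the whole point of combining vulnerability data with threat stats.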
Additionally, consider automation. It has long been a dirty word in Information Security, but many of these problems can be addressed through automation. I have written about SCAP before, which is one way of achieving this, but not the only solution. Regardless of your stance on SCAP, using standards to describe and score your vulnerabilities will give you a better view into them while removing some of the biases that can creep into the process.
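As one concrete example of standards-based scoring, CVSS v2 (one of the component standards SCAP builds on) computes its base score from a handful of published metric values. A minimal sketch of the published base equation:

```python
# CVSS v2 base score, per the published FIRST CVSS v2 equations.

def cvss2_base(av, ac, au, c, i, a):
    """Arguments are the numeric metric values from the CVSS v2 spec
    (e.g. av=1.0 for AV:Network, ac=0.71 for AC:Low, au=0.704 for Au:None,
    c/i/a = 0, 0.275, or 0.66 for None/Partial/Complete impact)."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- the worst case scores a 10.0
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.66, i=0.66, a=0.66))  # 10.0

# AV:N/AC:M/Au:N/C:P/I:P/A:P -- a common remotely exploitable profile
print(cvss2_base(av=1.0, ac=0.61, au=0.704, c=0.275, i=0.275, a=0.275))  # 6.8
```

The benefit isn’t the formula itself; it’s that every team scoring against the same standard gets comparable numbers, with less room for individual bias.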
In summary, vulnerability management has become a data management issue. But the good news is, this problem is already being solved in other areas of information technology. We just need to learn to adapt these solutions within a security context. How are you dealing with your vulnerabilities?
Tags: vulnerability management
Categories : application security, Risk Management, security management, vulnerabilities, vulnerability scanning
Before you roll your eyes in disdain that this blog is perpetuating the meme equivalent of a chain letter, please understand I am posting this purely out of fear and self-preservation. After being “tagged” for this duty, I expressed my concerns about participating via Twitter and received the following response from Erin Jacobs:
So… in an effort to control “my version of the truth”, as much as I don’t care to expose things not known about me (after all they are not known for a reason), here I go.
1. I have a FULL HOUSE. No, I’m not talking about poker. Literally, I’m married with 3 kids ages 6 and under, and I have a dog, a cat, and even fish (technically the fish is my daughter’s).
2. I come from a lineage of professional and semi-professional baseball players. I have had relatives who have played in both minor and major league baseball including my grandfather who pitched for a season with the Boston Red Sox and played with Ted Williams. I also played on the same baseball team in summer leagues with someone who eventually made it to the majors.
3. Speaking of lineage, I am #4 in a lineup of 5. I’ll let you figure that one out.
4. My first computer was an Atari 400 with a membrane keyboard and a whopping 4KB of RAM. My first two cartridges were Basic and Assembler. Later I added a modem and cassette drive. I was 9 years old, you do the math.
As part of this meme, I am now required to tag 5 additional people because that’s how these things work I guess (smell the enthusiasm?). So my apologies to: Mike (@rybolov) Smith, Ben (@falconsview) Tomhave, Trey (@treyford) Ford, Alex (@alexhutton) Hutton, and Dan (@danphilpott) Philpott.
Tags: meme, things I hate for $100 Alex
Categories : Uncategorized
Well another BlackHat is in the books and another round of vulnerabilities have been disclosed and bantered about. I was fortunate enough to be able to attend this year as a panelist on the Laws of Vulnerabilities 2.0 discussion. While I was happy and honored to be invited, I wanted to draw some attention to another talk.
No, I’m not talking about the SSL issues presented by Dan Kaminsky or Moxie Marlinspike. Nor am I referring to the mobile SMS exploits. Each year you can count on BlackHat and Defcon for presentations showcasing lots of interesting security research and incredibly sexy vulnerabilities and exploits. Every year attendees walk away with that sinking feeling that the end of the internet is nigh and we have little hope of averting its destruction. But despite this, we have not shut down the internet; we continue to chug along and develop new applications and infrastructure on top of it.
I was able to attend a session on Thursday that explained and theorized about why this all works out the way it does. It was the final session of the conference and, unfortunately, was opposite Bruce Schneier, which meant a lot of people who should have seen it didn’t. Of course, Bruce is a great speaker and I’m sure I missed out as well, but hey, that’s what the video is for.
David Mortman and Alex Hutton presented a risk management session on BlackHat vulnerabilities and ran them through the “Mortman/Hutton” risk model – clever name indeed. They included a couple of real-world practitioners and walked through how these newly disclosed vulnerabilities may or may not affect us over the coming weeks and months. They were able to quantify why some vulnerabilities have a greater effect, and at what point they reach a tipping point where a majority of users of a given technology should address them.
David and Alex are regular writers on the New School of Information Security blog and will be posting their model in full, with hopes of continuing to debate, evolve, and improve it. Do any of these new security vulnerabilities concern you? Go check out the model and see where they stand.
Note: This post was originally published on CSO Online.
Categories : Risk Management, security, security management, vulnerabilities
…And so does @rybolov. I don’t often do this, but the latest post on the Guerilla CISO blog is worth a re-post. Go check it out here. I have been talking about this a lot lately. SCAP is still coming into its own but has a lot of promise in helping security teams automate much of the vulnerability management and patching pains they experience today.
Tags: hamster wheel of pain, patching, scap, vulnerability management
Categories : standards, vulnerability scanning
Quick schedule update: I’m looking forward to both of these events. Let me know if you’ll be at either and want to chat; I’m looking to fill my schedule.
Metricon 4.0: This should be a really good event. The full agenda is published here. I will be discussing the use of the Security Content Automation Protocol (SCAP) and the metrics being produced from this new view into the data. You can request participation via email here.
Hope to see you there!
Tags: blackhat, metricon, metrics, scap, vulnerabilities
Categories : event, speaking engagements
In my inaugural post to this blog, I wrote about many of the religious wars that break out today regarding payment security, and specifically PCI. In that post I mentioned the OWASP PCI project, which is a solid step in the right direction, but to be clear, payment security encompasses a lot more than just PCI. PCI does a decent job of setting an audit bar for merchants and service providers, but now I’d like to focus on the broader topic of card-not-present security.
Recently, I was lucky enough to participate in and contribute to a new O’Reilly book, Beautiful Security. While I’d love to tell you my chapter outshone them all, in reality Mark Curphey’s contribution on Tomorrow’s Security Cogs and Levers was brilliant. Since publication, I have been speaking to a lot of people about payment card security and what I feel are some of its fundamental flaws. In my view, the root cause of many of the security pains around online payments is the reliance on a shared secret that is ultimately shared with hundreds or even thousands of parties over the life of a card. If there is a crack in the security armor of even a single one of these thousands of handlers, the card account may be breached. Within my chapter, I laid out seven fundamental requirements for a new payment security model. They are:
1. The consumer must be authenticated
2. The merchant must be authenticated
3. The transaction must be authorized
4. Authentication data should not be shared outside of authenticator and authenticatee
5. The process must not rely solely on shared secrets
6. Authentication should be portable
7. The confidentiality and integrity of data and transactions must be maintained
OK, so none of these is a revelation; you knew this already. Well, that’s why I am posting this here. I have since converted my Beautiful Security contribution into a wiki, found here. My original writing is a high-level design, and we now want to take this to the next step. I am certainly not foolish enough to believe there are no flaws in it, nor is it yet detailed enough to implement. This is where the security and payments folks come in. This is a call to action: read through the wiki, update it, and begin to flesh out the details that could turn this into an actionable payment security system that could actually be implemented. There’s a quick summary of the goals on the wiki home page to show where we are heading. But hey, this is a wiki, so those can change too! If you have some expertise in online payments or information security (I know you do, that’s why you’re here), please take the time to help out and contribute.
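To make the requirements concrete, here is a deliberately toy sketch of the flow they point toward: the consumer authorizes one specific transaction, the secret lives only with the consumer and the issuer (requirement 4), and the merchant only ever relays an opaque, transaction-specific tag. Every name and field here is hypothetical, and the HMAC is a stand-in; a real design would use asymmetric signatures so it does not rely solely on a shared secret (requirement 5):

```python
import hashlib
import hmac
import secrets

# Toy model: the authentication secret is shared ONLY between the
# consumer and the issuer; merchants never see it.
issuer_keys = {"consumer-1": secrets.token_bytes(32)}  # issuer's copy of the key

def consumer_sign(consumer_id, key, merchant_id, amount_cents, nonce):
    # Consumer authorizes one specific transaction (requirement 3);
    # the tag binds merchant, amount, and a one-time nonce together.
    msg = f"{consumer_id}|{merchant_id}|{amount_cents}|{nonce}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def issuer_authorize(consumer_id, merchant_id, amount_cents, nonce, tag):
    # Issuer verifies the consumer's authorization. The merchant only
    # relayed the opaque tag, which is useless for any other transaction.
    key = issuer_keys[consumer_id]
    msg = f"{consumer_id}|{merchant_id}|{amount_cents}|{nonce}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

nonce = secrets.token_hex(8)
tag = consumer_sign("consumer-1", issuer_keys["consumer-1"], "merchant-9", 1999, nonce)
print(issuer_authorize("consumer-1", "merchant-9", 1999, nonce, tag))  # True
print(issuer_authorize("consumer-1", "merchant-9", 9999, nonce, tag))  # False
```

Even in this toy form, a merchant breach exposes nothing reusable, which is exactly the property the card number model lacks.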
Note: This post originally published on CSO Online.
Tags: cso, O'Reilly, payment security, wiki
Categories : compliance, PCI, security, standards