
Monday, August 10, 2009

RISK Analysis


Becoming relevant…

Just a short note to let you know that I'll be giving a webinar for BrightTALK's Risk Management Summit on August 27 (http://www.brighttalk.com/summit/riskmanagement). The topic will be a "case study" of the benefits and challenges I encountered while transforming an infosec organization into an information risk management organization within a Fortune 100 financial services company. Needless to say, some parts of it may be a bit… umm… controversial. Hope you can join us.

Cheers,

Jack

The Curious Case of Asset Valuation

I recently had a discussion with someone about how to do asset valuation for risk assessments. It was a good discussion, and it prompted me to share some of it with you. The whole concept of asset valuation (as it exists for information security) is predicated on the assumption that acquisition cost is a good constituent factor of security risk. So, how do we evaluate the asset valuation landscape?

Let's start with our international standard for risk assessments: ISO 27005. There is a relatively lengthy discussion of asset valuation in Appendix B.2 (read here for Alex's 27005 review). This is encouraging; however, the discussion very quickly devolves from "what does it cost to replace this" to what the standard terms "consequences." Consequences are what happen as a result of having the asset. What does this mean? Well, they offer a list of things that may help (and they make a point of letting you know it might not be a complete list):

  • Interruption of service
  • Inability to provide the service
  • Loss of customer confidence
  • Loss of credibility in the internal information system
  • Damage to reputation
  • Disruption of internal operation
  • Disruption in the organization itself
  • Additional internal cost
  • Disruption of a third party’s operation
  • Disruption in third parties transacting with the organization
  • Inability to fulfill legal obligations
  • Inability to fulfill contractual obligations
  • Danger for the organization’s personnel and / or users
  • Attack on users’ private life

…and this isn’t the complete list.

What strikes me, as a FAIR practitioner, is that this list is devoid of any taxonomy. In other words, there are no categories, just a list of specific incident types (which surely isn't comprehensive). I jotted down a quick mapping to FAIR loss categories and noticed that most map to secondary loss categories, strengthening the Chicken Little security practitioner's view of the world.
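
To illustrate what a taxonomy-aware view buys you, here's a minimal sketch in Python of the kind of mapping I mean. The mapping choices are my own shorthand (and debatable), not an official ISO or FAIR artifact:

    # Illustrative only: a rough mapping of some of the ISO 27005
    # "consequences" above onto FAIR's six forms of loss. The choices
    # here are debatable, not an official FAIR or ISO artifact.
    FAIR_PRIMARY = {"productivity", "response", "replacement"}

    iso_to_fair = {
        "Interruption of service": "productivity",
        "Additional internal cost": "response",
        "Loss of customer confidence": "reputation",
        "Damage to reputation": "reputation",
        "Inability to fulfill legal obligations": "fines/judgments",
        "Inability to fulfill contractual obligations": "fines/judgments",
    }

    for consequence, loss_form in iso_to_fair.items():
        kind = "primary" if loss_form in FAIR_PRIMARY else "secondary"
        print(f"{consequence:46} -> {loss_form} ({kind})")

Run it and you'll see the pattern I noticed: most of the ISO items land in the secondary loss forms.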

So FAIR practitioners are at an advantage when speaking about asset valuation, because we don't get caught up in the existential discussion about an asset's "consequences," and likewise we don't narrow our focus to just replacement cost. We have our capacious list of loss categories, and we rely on the scenario to help guide our Probable Loss Magnitude discussions. Because asset valuation (from my strict vantage point of replacement cost only) is myopic, the bigger, more important discussion is what the losses look like. Certainly, these are consequences, but the perspective is different. Instead of listing every possible loss for a given asset, we only estimate the losses for an asset in a given scenario. There may be many scenarios, but funneling our thoughts to the specific saves us from wondering what effect nuclear fallout will have on our database servers.

The six loss categories are great fodder for really interesting discussions that bring the security and risk practitioner closer to the business, and this is key: you can get asset valuation from the balance sheet, but true business risk often sits in the heads of the technical staff, managers, and executives in the business. What this means is that risk (and infosec, really) needs to be the (or at least a) bridge between IT and the business.

Some thoughts on “Physics Envy”

My apologies. I know I’m already late in writing a follow-up to my post on aggregate risk, but client work has been fast and furious lately (a Good Thing). I’m still working on it though. That said, I just can’t resist the need/desire to comment on someone else’s misleading post on the subject of risk analysis.

This time it's Richard Bejtlich, who appears to be on the warpath again. And, while Richard is extremely intelligent and well established as an expert in many security-related matters, I'd argue that he's not an expert in risk. It's clear that he reads on the topic, but he appears to interpret it through an infosec lens which, I believe, tends to be badly distorted by some unfortunate/inaccurate biases in the industry. That said, let's examine some of his concerns and see what we can learn…

“False precision”

On this point, Richard and I agree — the notion of precision in risk analysis (whether infosec-related or some other form of risk) is absurd. The future is uncertain, and risk analysis is fundamentally a discussion of the future. A precise statement (i.e., prediction) of exactly when something will happen, or how often, or to what effect, just isn’t feasible in a complex problem space like risk. Where Richard and I appear to differ, however, is in our understanding of what risk analysis is and isn’t.

Risk analysis isn't (or shouldn't be) put forth as a prediction of the future, but rather as a statement of probabilities given what's known or believed. Much like a statement that there's a 1/36 probability of rolling snake-eyes, given that each die has six sides and the two dice roll independently. Nobody in their right mind would believe they could predict on which roll the dice will come up snake-eyes, but it's still very useful as a decision-maker to know what the probabilities are.
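
A toy simulation makes the distinction concrete: the long-run probability can be stated quite precisely, even though no individual roll can be predicted. A minimal sketch in Python:

    import random

    # Estimate the probability of snake-eyes (a pair of 1s) empirically.
    # The analytic answer, given fair and independent dice, is
    # (1/6) * (1/6) = 1/36, about 0.0278.
    trials = 1_000_000
    snake_eyes = sum(
        1 for _ in range(trials)
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1
    )
    print(f"empirical: {snake_eyes / trials:.4f}   analytic: {1 / 36:.4f}")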

Analysis results should also reflect the degree of (un)certainty involved, so that people making decisions based on the analysis have realistic expectations. This is where the use of distributions and ranges becomes very useful in portraying uncertainty and imprecision. However, if I read Richard's post correctly, he considers all risk analyses to be useless because they can't predict the future (i.e., aren't precise).
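
For example, here's a minimal sketch of reporting an estimate as a range rather than a point; the loss figures are invented placeholders:

    import random

    # Express the estimate as a distribution and report a range, not a
    # point. random.triangular(low, high, mode) takes the minimum,
    # maximum, and most-likely values. Dollar figures are placeholders.
    random.seed(1)
    samples = sorted(random.triangular(10_000, 500_000, 75_000)
                     for _ in range(100_000))
    p05, p50, p95 = (samples[int(p * len(samples))]
                     for p in (0.05, 0.50, 0.95))
    print(f"median ~${p50:,.0f}; 90% interval ~${p05:,.0f} to ${p95:,.0f}")

A decision-maker reading "roughly $25k to $300k, most likely around $90k" has far more realistic expectations than one handed a single number.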

This “if it ain’t precise, it ain’t useful” position is one I run into frequently in the infosec community, presumably because the profession is made up of so many people with engineering backgrounds who are used to measuring things relatively precisely. Or, maybe, Richard’s just pointing out that many risk analyses are flawed because they don’t do a good job of conveying the degree of imprecision involved. He’s really not clear on that, and seems to paint the entire issue with a single broad brush stroke.

“Overweighting things that can be counted”

Here again, I tend to agree with Richard about the fundamental problem. There is an unfortunate tendency to look around us for things that can be easily counted, and then assume that they comprise the whole picture. This may not be as significant a problem if you’re dealing with something that has a large volume of relatively clean data to work from, but is a huge problem when good data is sparse. For example, if I want to construct models of human life expectancy in order to profit as a life insurance provider, I can probably find enough good data to do so. Unfortunately, in the infosec realm, good data is harder to come by. As a result, models derived from available hard infosec data are much less likely to be complete/accurate.

What I've described above, however, is an inductive approach to modeling — i.e., evaluate data to derive a model. The other approach to modeling is deductive. I'll spare you a long-winded comparison and let you research these further if you're interested but, simply stated, a deductive approach constructs a model based on logical (and believed-to-be-true) relationships between premises. For example, a model that states "Loss events are predicated on threat events and vulnerability to those threat events" is logical and "true". We didn't need data to construct that model — it just makes sense logically.

Does that mean that deductive models are always accurate? Heck no. They’re subject to potential problems too, but if they’re well thought-out they’re less likely to have the gaps an inductive model built on sparse data is likely to have. Deductive models also act as a guide to knowing what data we need in order to perform analyses.

Keep in mind, though, that "all models are wrong (i.e., imprecise), some are useful". As I've said before, the world is far too complex to model exactly. Nonetheless, we should be looking for accuracy and a useful degree of precision in our models, which is entirely feasible. Also, all models should be flexible enough to be adjusted as data and experience improve.

“Man with a spreadsheet syndrome”

Richard’s basic concern is valid — spreadsheets often connote a sense of validity that isn’t always warranted. It isn’t logical, however, to conclude that because some spreadsheets (or other quantitative tools) are flawed, all must be. Maybe that isn’t what he meant, but Richard tends to use a broad brush, so it’s hard to tell sometimes what he’s really saying.

At the end of the day, the validity of a spreadsheet boils down to model accuracy and the quality of data. Since we’ve already covered models, I guess it’s time to cover data…

“A lot of guessing”

As Alex states, Richard (and others) toss the term "guessing" around like it's an insult. If what Richard means by "guessing" is "estimates made in the absence of perfectly complete and precise data", then welcome, Richard, to reality. All measurements in the real world are guesses to some degree. I assume, however, that what Richard is concerned about is whether the estimates (guesses) are accurate. The answer to that, of course, is "it depends".

If someone asks me what the wingspan of a 747 airliner is, and I answer "Ummmm, I dunno. A hundred feet?", then maybe we have a problem. For one thing, I've given a relatively precise answer (100 ft), but that answer may not be accurate. If, however, I answer "Well, the wingspan is almost certain to be less than the length of a football field (300 ft) but greater than the length of my driveway (80 ft)", then I've made an estimate that isn't precise but is much more likely to be accurate. With a little work, I can probably narrow the range significantly (i.e., get better precision) and still be accurate (especially if I have access to a subject matter expert). The question of whether it's precise enough (i.e., is useful) is a matter of what I need to use the information for.
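
In code form the distinction is easy to demonstrate. A minimal sketch, using roughly 196 ft as the "true" value (about right for early 747 models; later variants are wider):

    # "Accurate" means the true value falls inside the stated range;
    # "precision" is how narrow the range is. ~196 ft is roughly the
    # wingspan of early 747 models (later variants are wider).
    TRUE_WINGSPAN_FT = 196

    estimates = {
        "point guess": (100, 100),        # precise, but wrong
        "driveway-to-field": (80, 300),   # imprecise, but accurate
        "narrowed range": (150, 250),     # more precise, still accurate
    }

    for name, (low, high) in estimates.items():
        accurate = low <= TRUE_WINGSPAN_FT <= high
        print(f"{name:18} width={high - low:3d} ft  accurate={accurate}")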

Business decisions of almost any sort are based on imperfect and imprecise estimates of what might happen. That’s reality. As long as decision-makers are aware of and okay with the imprecise nature of the information they’re operating from, then it’s not a problem.

What people seem to forget is that whether we perform formal analysis on a problem or not, a decision is still going to be made. The question then becomes, is the decision-maker going to be using conclusions drawn from:

  • Someone’s unstructured, undocumented, mental model and the “guesses” they apply to it, or
  • A structured model that has been documented, examined, and evolved through use, and “guesses” that have been given due consideration

Either way, some model will be applied and guesses/estimates will be used. The point of analysis is to give decision-makers better information than they would have had in the absence of analysis.

Bottom line — it seems like Richard has accurately recognized the existence of some of the fundamental challenges in risk analysis, but it feels like he’s drawn some extreme conclusions about their significance and the ability to effectively deal with them. It could be, of course, that the problem is simply the manner in which he described his conclusions. Perhaps he’ll respond and clear it all up.

It’s a FAIR Pandemic…

RMI welcomes Jack Freund to the RiskAnalys.is blog…

Once again the 24-hour news cycle is buffeting us with "information" about the new risk that will surely end us all. I've received several "breaking news" stories in my email about the pandemic in the last few days. Russia is now taking the step of banning meat from several states and nations (despite the flu not being transmitted by meat). Clearly, we're being told this is a serious risk.

Several years ago, in a previous life, I participated in an avian flu preparation exercise at the large telecom manufacturer where I worked. The goal was to determine the extent to which a large-scale pandemic would disrupt business operations, and what kind of controls could be put into place to minimize the disruption. This primarily took the form of a questionnaire that was given to every employee. When it came my turn to fill it out and I got down to the section where I had to decide whether I could do my job from home, I quickly chose yes. Even if I couldn't, I thought, I'd be a fool to say so and be forced to come into the office with all those other sick people.

I am certainly unqualified to comment on whether this is or will be the next new plague. However, let’s look at this from the perspective of your own organization and how this might contribute to your risk analyses.

The swine flu will affect most organizations in one direct way, and in some other, not-so-direct ways. This is mostly a business continuity risk. If you are following a business continuity management system program (such as BS 25999), you should have identified the parts of your business that are a priority (typically the large, revenue-generating parts). Remember that in a disruption, cash flow is paramount. Creating a business process map of these parts of your business will help when framing the risk scenarios to be analyzed with FAIR.

So, what will these risk scenarios look like?

Well, clearly any part of the business that needs people to operate it, or that needs people to input something manually, has the potential to be disrupted: manufacturing, sales, order processing, and so on. Any part of your business that depends on person-to-person contact (such as retail sales) will also be affected.

What the process map will also tell you is that there are several external dependencies that need to be considered. These are the not-so-direct ways I mentioned above. Namely, reliance upon other businesses (B2B) puts you at the mercy of those organizations' continuity plans (be sure to do your second-party audits). Anything from service and support to shipping and supplies may be disrupted.

The loss magnitude side of the FAIR equation will have the following trajectory. There will be heavy losses on the productivity side: clearly, if your people can't come to work, whether because they are too sick, frightened, quarantined, under government order not to leave their homes, or deceased (let's hope not that last one), work doesn't get done. Response costs will exist, if for no other reason than the activation of your BC plans. Replacement costs are a tricky one for me. The most obvious control would be to have an alternate work facility that isn't geographically near your primary one; however, this flu has already hit several countries and states. I'd be interested in hearing from you in the comments what you think about this one. For some organizations the answer will be to take the hit. On the secondary loss side, there are some issues. I doubt that there will be litigation (most contracts have a force majeure clause that would be argued). If the impact of the flu is evenly dispersed, then competitive advantage and reputation losses become less significant.
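
For the curious, here's a minimal sketch of what that loss-magnitude roll-up could look like in code. Every dollar figure is an invented placeholder; substitute your own calibrated (min, most-likely, max) estimates per loss form:

    import random

    # (min, most-likely, max) dollar estimates per FAIR form of loss.
    # All figures are invented placeholders for illustration only.
    loss_forms = {
        "productivity": (200_000, 1_000_000, 5_000_000),
        "response": (50_000, 150_000, 500_000),
        "replacement": (0, 100_000, 2_000_000),        # alternate facility?
        "fines/judgments": (0, 10_000, 100_000),       # force majeure likely
        "competitive advantage": (0, 25_000, 250_000), # evenly dispersed hit
        "reputation": (0, 25_000, 250_000),
    }

    random.seed(7)

    def sample_total():
        # random.triangular's signature is (low, high, mode)
        return sum(random.triangular(lo, hi, ml)
                   for lo, ml, hi in loss_forms.values())

    totals = sorted(sample_total() for _ in range(50_000))
    p05 = totals[int(0.05 * len(totals))]
    p95 = totals[int(0.95 * len(totals))]
    print(f"90% interval for total loss: ${p05:,.0f} to ${p95:,.0f}")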

The biggest takeaway is that the traditional ways of dealing with this type of outage (work from home and hot/warm/cold sites) may also be affected by the same outage. It's easy to dismiss this risk as a low-frequency, high-magnitude event, but that is exactly why FAIR has the unstable risk element. It shouldn't be written off as crazy end-times talk, but it should also be taken with a grain of salt.

There are many other interesting risk angles in this event. For instance, how the memory of the 1918 flu pandemic goaded then-President Ford into calling for nationwide vaccination during the 1976 swine flu scare, how in so doing the pharmaceutical companies had to forgo work in other areas, and how the side effects of the vaccine caused Guillain-Barré syndrome in many recipients. Some believe that the cost in human lives was greater from the vaccine than would have occurred from the flu itself.

As for me, I’ll be stocking up on paper face masks, duct tape, and Pop Tarts. I’ll read your comments from my fortified basement bunker…

Pair of Jacks

Please join me in welcoming Jack Freund as a contributor to this blog. Jack is a certified FAIR analyst and has a boatload of experience in the information security profession. Welcome Jack!

Aggregate analysis (or measuring the surface area of Long Island)

One of the questions I commonly encounter is “How do you take something like FAIR and apply it to a big problem, like measuring the aggregate risk within an entire organization?” In order to keep this post from becoming too long, I’ll focus on the concepts of one approach in this post, and show a simple example in a following post.

Measuring the surface area of Long Island

Imagine that you've been given the task of determining the surface area of Long Island. How are you going to go about it? One way would be to use a satellite photo and make a rough estimate based on the level of available detail and an appropriate scale.

With that information you’d probably report that the surface area is between X and Y square miles, with Z% of certainty. If this level of precision isn’t good enough, you might want to carve the area into chunks, zoom in, measure each chunk, and then add them all up.

If even more precision is needed, you can carve the landscape into finer chunks…

You could also take surveying equipment and go about the process of making measurements on the ground.

Extrapolating these approaches, it’s conceivable (given sufficient time and tools) that you could try to measure the sub-atomic particles that make up the atoms, molecules, and objects that constitute the island’s surface area. Let me know how that goes BTW…
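
To make the "carve, measure, sum" approach concrete, here's a minimal sketch; the per-chunk ranges are invented, but the arithmetic is the point:

    # Each chunk gets a (low, high) estimate in square miles; the totals
    # simply add. The chunk figures are invented; Long Island's actual
    # land area is on the order of 1,400 sq mi.
    chunks = {
        "western chunk": (350, 450),
        "central chunk": (400, 520),
        "eastern chunk": (380, 500),
    }

    total_low = sum(low for low, _ in chunks.values())
    total_high = sum(high for _, high in chunks.values())
    print(f"surface area estimate: {total_low} to {total_high} sq mi")
    # Finer chunks -> narrower per-chunk ranges -> a narrower (more
    # precise) total, at the cost of more measurement effort.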

Regardless of your level of abstraction and your approach, two things are clear:

  • There is a point of diminishing returns when it comes to measurement precision, and that point is found at the balance between how the measurement is going to be used and the cost/effort in making the measurement
  • There is always some degree of uncertainty and imprecision. Even at a sub-atomic level of measurement, the effects of erosion and other dynamic processes guarantee that by the time you’ve finished measuring, the subject being measured will have changed

Measuring aggregate risk

In tackling the problem of aggregate risk, we likewise can carve up the landscape into chunks that are measurable and meaningful given our time and resources. One example might be to define a set of object groups such as:

  • Network devices
  • Network transmission media
  • Servers
  • Personal systems
  • Applications
  • Printed media
  • etc…

Harkening back to our notion of abstraction, you could carve the landscape even finer. Perhaps, for example, defining subsets of servers based on operating system, function, location, line of business, etc. Bottom line — you’ll want to define a level of abstraction that provides useful information given the available time and resources.

In order to perform a risk analysis, we also have to define our threat landscape. This, too, needs to be defined at an appropriate level of abstraction. For example, a high level breakdown might look like:

  • External hackers
  • Disgruntled employees
  • Contractors
  • etc…

You can also define threats that aren’t malicious, such as….

  • Acts of nature
    • Weather
    • Geological disturbances
    • Extraterrestrial (No - not ET. I mean things like solar flares, etc.)
    • Animals (non-humans)
  • Errors & failures
    • Employees
    • Service providers
    • Suppliers

… and/or become even more granular in your definitions.

With your landscape defined, you’re in a position to begin a series of high-level FAIR analyses.
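
As a sketch of what such a series of high-level analyses might look like mechanically (this is my own simplification, not the full FAIR ontology, and every number is a placeholder):

    import random

    # For each (object group, threat community) pair: a (min, mode, max)
    # estimate of loss event frequency (events/yr) and of loss magnitude
    # ($ per event). All numbers below are placeholders.
    scenarios = {
        ("servers", "external hackers"):
            ((0.1, 1.0, 5.0), (10_000, 100_000, 2_000_000)),
        ("applications", "disgruntled employees"):
            ((0.05, 0.5, 2.0), (50_000, 250_000, 5_000_000)),
        ("printed media", "errors & failures"):
            ((0.5, 2.0, 10.0), (1_000, 10_000, 100_000)),
    }

    random.seed(42)

    def annual_loss():
        total = 0.0
        for lef, lm in scenarios.values():
            # Crude: draw an events/yr figure and apply one magnitude
            # draw per scenario; random.triangular is (low, high, mode).
            events = random.triangular(lef[0], lef[2], lef[1])
            total += events * random.triangular(lm[0], lm[2], lm[1])
        return total

    results = sorted(annual_loss() for _ in range(20_000))
    p05, p50, p95 = (results[int(p * len(results))]
                     for p in (0.05, 0.5, 0.95))
    print(f"aggregate annualized loss: median ${p50:,.0f}, "
          f"90% interval ${p05:,.0f} to ${p95:,.0f}")

The results from a pass like this tell you which cells of the landscape deserve a deeper, finer-grained analysis.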

Things to keep in mind:

  • It doesn't matter what level of "elevation" you're operating from: all measurements are estimates, which means there's always some degree of uncertainty and error. Therefore, the question isn't whether uncertainty and imprecision exist in a measurement; it's whether the degree of certainty and precision is acceptable.
  • The acceptability of a measurement isn’t for the measurer to decide. It’s up to the people who are making decisions based on the information. What’s important is that they understand the degree of (un)certainty and (im)precision in the measurements being provided.
  • Precision is often (always?) a function of the problem's size, complexity, and dynamism, the quality of available tools and methods, and the time spent measuring. In other words, the more precision desired, the more time/money that's going to have to be spent.
  • Accuracy and precision aren't the same thing. I can be precise and not be accurate. For example, an estimate that my 2009 income is going to be $2,352,004.32 is highly precise. Unfortunately, it's not even close to being accurate (unless a miracle occurs). Conversely, an estimate that my income will be between $50k and $500k isn't very precise, but it's likely to be accurate. In an ideal world we could be accurate and highly precise. In the real world, at least when measuring risk, you want to be accurate and have an acceptable degree of precision.
  • When performing an aggregate risk analysis, start at a relatively high level of abstraction and let the results from that analysis guide you regarding where to dive deeper. This helps to ensure that you find a feasible level of precision while managing your time and effort effectively.
  • Were we supposed to measure surface area at high tide or low tide? The point is, it is crucial to be very clear about what’s in and out of scope within the analysis, as unstated and misaligned assumptions can result in measurements that don’t meet management’s objectives.

Next week — A simple example.

Load of Tosh?

Long time no post… My sincere apologies, and I hope someone out there is still interested. I guess I needed a little prodding, which Stuart King so kindly provided. I’ve provided a response on his site, which I won’t repeat in its entirety here. Suffice it to say that Stuart seems, on occasion, to rant about things he’s not fully informed on. My friend Chris offers a couple of very good posts in response to Stuart as well.

On to something a bit more useful and, hopefully, interesting. An acquaintance of mine recently showed me the controls analysis model he and his colleagues had developed. It was very complex, full of interesting and sophisticated formulas, and… weighted variables. Argh. I'll save my comments regarding sophisticated formulas for another post. As for weighting, for those of you who haven't heard me rant on this before, I'll summarize: weighted variables seem to me simply another way of saying "We believe X is more important than Y, but we haven't taken the time to figure out why or by how much". In the absence of a clear underlying explanation for why/how much, I think it would be very difficult to rationally defend any degree of weighting that's been applied.

Another problem is that weights are, in many cases, dependent on context. For example, in his model, he had weighted authentication as more important than logging and monitoring. And intuitively we might think, "Yeah, that seems right". But what about the scenario where the threat agent is the privileged insider, the person who legitimately has credentials and access? In that case, wouldn't logging and monitoring be more important?
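
A toy calculation shows the problem; the weights and control-strength scores below are invented:

    # A toy illustration of the context problem with fixed weights.
    # The weights and control-strength scores are invented.
    weights = {"authentication": 0.7, "logging_monitoring": 0.3}

    def weighted_score(control_strengths):
        return sum(weights[c] * s for c, s in control_strengths.items())

    # Against an external attacker, weighting authentication heavily
    # seems reasonable. Against a privileged insider who already holds
    # valid credentials, authentication contributes nothing, yet the
    # fixed weights still dominate the score.
    external_attacker = {"authentication": 0.9, "logging_monitoring": 0.4}
    privileged_insider = {"authentication": 0.0, "logging_monitoring": 0.9}

    print(f"external attacker:  {weighted_score(external_attacker):.2f}")
    print(f"privileged insider: {weighted_score(privileged_insider):.2f}")
    # The insider scenario scores low despite strong monitoring, because
    # the weights were fixed without regard to context.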

Bottom line - the concept of weighting is attractive but problematic. If you're going to use it, recognize some of the challenges that come with it. Oh, and let me know if you've developed a set of rationales to support the specific weights you've applied.

Alex

Those of you who are familiar with this blog probably recognize Alex Hutton as THE voice of RMI and FAIR, and for good reason. For over two years now, Alex has earned a reputation as a spirited, thought-leading blogger who regularly pushed the boundaries of conventional wisdom, seeking to help the industry evolve. Unfortunately for RMI, Alex has recently chosen to accept a position with Verizon. To Verizon, I offer my sincere congratulations. For me, Alex has been a marvelous business associate and continues to be a dear friend. The good news is that Alex still intends to blog about FAIR and risk, so the industry will continue to benefit from his wisdom and insight (and maybe a little incite), and he'll still be an advisor to RMI, so I'll continue to have the opportunity to bounce my ideas off him. Please join me in wishing Alex the best of luck in his new position and adventures.

As for this blog, Alex has expressed an interest in still contributing when possible. I’ll also contribute more frequently in hopes that we’ll continue to make it worth your while to visit.

Sweet Giveaway: HoneyPoint Personal Edition License

I have five licenses for MicroSolved's HoneyPoint Personal Edition honeypot to give away. I'm using the OSX version right now at a coffee shop. From what Brent Huston tells me, you can even arm this thing with their "defensive fuzzing" plug-in. It's a great opportunity to get some TEF (Threat Event Frequency) numbers against your laptop!

It’s available for OSX, Linux, and Windows.

The first five commenters who leave me their email address (don't put it in the comment itself, just in the form, so nobody else sees it) get a license.
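
And once you have some hit data, a back-of-the-envelope TEF estimate is just event counting over an observation window. A hypothetical sketch (the timestamps and format are invented, not HoneyPoint's actual log output):

    from datetime import datetime

    # Hypothetical honeypot hit timestamps (invented; a real log format
    # will differ). TEF here = observed threat events, annualized.
    hits = [
        "2009-07-01 09:14", "2009-07-03 22:51", "2009-07-09 02:07",
        "2009-07-15 11:33", "2009-07-28 17:45",
    ]
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M") for t in hits]
    observed_days = (max(times) - min(times)).days or 1
    tef_per_year = len(times) / observed_days * 365
    print(f"{len(times)} threat events over {observed_days} days "
          f"~ {tef_per_year:.0f} events/yr")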

Potpourri: Ponemon, Payment Professionals, Perimeters, & Pete Lindstrom

Today’s blog post is a quick catch up post on several fronts.

I LIKE PROFESSIONAL ASSOCIATIONS

First, Chris Hayes, David Mortman, and I had the honor of being bought dinner by Mike Dahn. Mike's right: he and I are working towards the same goal; he's just brilliantly practical about PCI and about helping people. To that end, if you're not already aware, I wanted to mention that Mike's working with the Society of Payment Security Professionals. And I think that's awesome.

I've been known to be really enthusiastic in my support for professional associations; those that focus on specific verticals, like Kelly Dowell's CUISPA, add a lot of networking value. As you think about the four landscapes we risk managers need visibility into (Loss Magnitude, Threat, Controls, & Assets), professional associations can really add value: there are informative similarities that can be shared when professionals are able to establish real trust relationships. Let me encourage those of you with PCI concerns to look into the Society of Payment Security Professionals as a means to share information and maybe even gain some representation. Also, Mike's a great, very smart guy.

I’M SKEPTICAL ABOUT INCIDENT LOSS VALUES SOMETIMES

The Ponemon study is out. Check out Adam's preliminary analysis at EmergentChaos if you haven't already.

DOCUMENTS FOR THE JERICHO-IZATION OF YOUR NETWORK

Via Crowmore.se: Onewalldown.com and Cap Gemini have started publishing free PDFs on what Jericho networks look like. Note to Cap Gemini: if you want me to read marketing material, putting dark text on a colored, semi-opaque background is OK. If you want me to read a white paper, please make it easier to read.

My aesthetic whining aside, these are good, important documents. If you're a security architect and you haven't thought about Trust Brokering, you need to start.

PETE LINDSTROM HAS BEEN ON A ROLL

He and I don't see eye to eye on everything, but he's been posting good material lately. If you haven't been reading him, his latest post on Thinking Strategically About Security Metrics is a good starting point and worth your read.

I like my four landscapes as a source of strategic thinking better than his four strategic metrics sources, but they're not so dissimilar as to keep us from thinking about how they might relate to each other.



