Watch the episode on YouTube.
Reasonably Accurate 🤖🧠 Transcript
Good morning everyone. How's it going? Third day in a row for Mornings with Mark, and I'm still forgetting not to shake the camera. I'm still working out that intro. It looks cool on my side, but it's not kicking out on the stream, no idea why; gonna tackle that later on today. Made some adjustments to lighting, made some adjustments to the sound.
I think we're doing good here. Thanks for joining again. The first couple of days we talked about perspectives. Yesterday we delved a little bit into the Olympic hack and that really cool response from them, basically saying: we're not super worried about attribution at the moment. We're gonna worry about having the games run smoothly.
And then we'll figure it all out after the fact. That's a really mature response from a security program. But one of the things I wanted to talk about, something we think is mature but that's really not that mature, is how we calculate risk, or how we adjust for risk.
So we've been talking about perspectives, and that kind of naturally spins along to risk in general. When you're setting up a security program, or if you're looking at the security of your application or of your organization, what you're going to be doing is a threat assessment, a threat and risk assessment.
You want to know how risky things are, right? If I stand up a web server and start taking money from folks in exchange for digital downloads, or, you know, order processing, what are my risks? What do I need to compensate for? What do I need to apply security controls around?
That sounds really, really formal, but basically it means you're doing something, someone may attack it, and you may have errors. There's a risk there that you can offset or you can accept. So you can either do something about it, like add a security control or maybe double-check your code; there's something you can do to mitigate a risk. Or you can simply accept it and say: I don't think this is likely to happen, I'm okay if it does, and I'll adjust. Now, the way we do these risk assessments is through a framework, and right now there are a number of frameworks out there.
The most popular tends to be the NIST framework, and I'll link to that down below. In Canada, we use the Harmonized Threat and Risk Assessment, which is basically modified from the NIST work. There are a number of others out there, and there are ways to model threats and things like that. They all work relatively well, but they all have a nasty little secret as well.
They're basically based on guessing. Now, that's a real problem, especially for me as a forensic scientist, with that scientific mindset we should all be applying. It's computer science after all, not computer guessing. We really should have a lot better data to make these decisions.
And that's really the crux of the problem: we don't have appropriate data to make proper risk assessments. So what we end up seeing in a risk assessment is that people will essentially rate it high, medium, or low, or they'll pick a number because it feels better and say this is a seven-out-of-10 risk.
What the hell does that mean? Really, it's all relative; it's basically just pulling a number out of somewhere and saying, I think this is seven out of 10 likely to happen, or yellow likely to happen. You can't make a decision based on that.
So if you look at comparable ways to evaluate risk, look at the insurance industry. I know we don't like insurance, but take auto insurance, to avoid the whole health care debate. Auto insurers have essentially a century's worth of data to make decisions on, so they know, in a busy city, let's say Chicago, that a female driver aged 28 to 32 in the western part of the city is this likely to have an accident, or that likely to have her car vandalized or broken into. That's based on empirical data, it's based on quantitative data, and that makes for a completely different risk decision when you actually have data to back it up.
As opposed to a yellow risk, they actually have a number. They can say: out of every 365 days we see 32 different crimes in this area, or 32 insurance claims over the course of the last 50 years. That's really solid data to make a risk prediction from. When it comes to cybersecurity, we don't have the data.
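Just to make that insurance-style math concrete, here's a minimal sketch of how historical frequency data turns into an actual number you can reason about. The claim counts, policy counts, and costs below are made up for illustration; the point is the shape of the calculation, not the figures.

```python
# Minimal sketch: turning historical claim counts into a usable risk number.
# All figures are hypothetical; they are not real insurance or crime data.

YEARS_OF_DATA = 50          # how long claims have been recorded in this area
TOTAL_CLAIMS = 32           # vandalism/break-in claims observed over that period
POLICIES_IN_AREA = 1_200    # assumed number of insured cars in the area
AVG_CLAIM_COST = 2_500.00   # assumed average payout per claim

# Annualized frequency: how many claims we expect in a typical year.
claims_per_year = TOTAL_CLAIMS / YEARS_OF_DATA

# Per-policy likelihood: the chance any one car is hit in a given year.
annual_probability = claims_per_year / POLICIES_IN_AREA

# Annualized loss expectancy: frequency times impact, the kind of number
# a "seven out of 10" or "yellow" rating can never give you.
expected_annual_loss = claims_per_year * AVG_CLAIM_COST

print(f"Expected claims per year:      {claims_per_year:.2f}")
print(f"Annual probability per policy: {annual_probability:.4%}")
print(f"Expected annual loss:          ${expected_annual_loss:,.2f}")
```

An insurer can run that kind of math because the inputs exist; in security, most of the time, they don't.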
And that's really the main problem. We don't have the data, and there's no easy way for you to contextualize your issues. So if we tie back to yesterday's talk about the Olympics, you look at that and go, okay, how are they making risk decisions? Well, they're essentially assuming it's 100% guaranteed that they are going to be attacked by persistent attackers (gotta get my tea going), attackers that are absolutely relentless, and in their case they're almost always proved correct. So that's good. And then, year after year, or every two years for the Olympics, they use the data from the last Olympics as a starting point for the risk assessment of the next one, saying: well, we saw 100,000 attacks, or attempts to breach the perimeter.
Here's what we're going to assume for next time. So they're actually using their own data and learning year over year, and that's not bad. But they're not including the overall global data, because we don't have great global data. Now, companies like the one I work for, like all the security companies out there, are trying desperately to get this data, but it's not something any one person, any one company, or any one organization can solve.
It's really a matter of collaboration across the community, and a realization that there's no competition in this respect: we need better data around not just attacks, but responses to attacks and preparations by organizations. We can't make proper risk assessments right now because we simply don't have the quantitative data to back them up. Until we get that,
it's going to be guessing; it's going to be red, yellow, green risk. Now, that's not to say you shouldn't start, or shouldn't keep doing, risk assessments. In fact, we need to re-evaluate how we do risk assessments, and not just on the data side. There's this tendency of doing a risk assessment once every year or once every X number of months; you need to continually assess risk.
It needs to be in your CI/CD pipeline, your continuous integration, continuous delivery pipeline, simply because you're making changes all the time. And as soon as you make a change, you may or may not be opening yourself up to new risks. So that's really a key thing, and that's some of the stuff I'm diving into today. It bubbled up because, based on my line of work, I'm obviously doing a lot of work in the cloud, and I was looking at AWS' fantastic Well-Architected Framework: a set of best practices, white papers, and really just a good framework for deploying things in the AWS cloud effectively. Obviously security is a pillar and reliability is a pillar, and they talk about this, but there's no integrated way of doing continual risk assessments, of evaluating the impact of the changes.
You know, what is the potential opening you're creating by making these code changes? And I think that applies not just to cloud builds, but to everything in general: we need to be far more ruthless in our risk assessments, trying to gather that data, and be upfront if something is simply an assumption or a guess, and list what those assumptions and guesses are, so that you go in with your eyes open. Because the only thing worse than not doing a risk assessment is having a risk assessment and assuming it's quantitative when it's really just somebody's best guess.
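Since I mentioned putting this into the CI/CD pipeline, here's a rough sketch of what a continual check like that might look like. None of this comes from AWS or any particular tool; the file name, fields, and format of the risk register are invented for the example. The idea is simply that every risk entry states whether it's backed by measured data or is just an assumption, and that the check runs on every change.

```python
#!/usr/bin/env python3
# Hypothetical CI/CD step: re-check the team's risk register on every change.
# The register file name, fields, and format are invented for this sketch.
import json
import sys

REGISTER_PATH = "risk-register.json"  # hypothetical file kept alongside the code


def main() -> int:
    with open(REGISTER_PATH) as f:
        register = json.load(f)

    risks = register["risks"]
    guesses = [r for r in risks if r.get("basis") == "assumption"]
    measured = [r for r in risks if r.get("basis") == "measured"]

    print(f"{len(measured)} risk(s) backed by data, {len(guesses)} based on assumptions")

    # Surface every assumption on every build, so nobody mistakes it for data.
    for risk in guesses:
        print(f"  ASSUMPTION: {risk['id']} - {risk['description']}")

    # Fail the build if any risk has never been reviewed at all.
    unreviewed = [r for r in risks if not r.get("last_reviewed")]
    if unreviewed:
        ids = ", ".join(r["id"] for r in unreviewed)
        print(f"ERROR: never-reviewed risks: {ids}", file=sys.stderr)
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Even a simple gate like that keeps the assumptions visible on every build instead of buried in last year's spreadsheet.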
So you need to be clear about that. That's what I've been thinking about this morning, and that's what I'm going to be diving into, I think, throughout the day. What are your thoughts on risk assessments? Let me know. Hit me up in the comments here on Facebook if you're watching live, or on YouTube, or wherever you find this video.
I'm always eager to engage in conversation, especially around a topic like this, because I think there are a lot of unfortunate misconceptions and bad perceptions around what the intention of risk assessments is and how onerous they are. And I think we can do a lot better as a community. As always, or at least as of three days now,
you can catch me again tomorrow morning around 9:30 Eastern, live on Facebook, with the video posted afterwards to my other channels. You can also, I'm learning, hit me up on Twitter, @marknca; on most social networks you can find me at marknca.
Always looking to engage, always looking to help folks out and to learn, because security doesn't have to be as hard as it is. Hope you have a great day, builders. We'll talk to you soon.