Watch this episode on YouTube.
Reasonably Accurate 🤖🎧 Transcript
Morning everybody. How are you doing today? In this episode of the show, we're going to take a look at a really positive response to a software vulnerability. This week, MuleSoft, which is now owned by Salesforce, had a pretty bad software vulnerability, and that happens. Just to put this in context:
MuleSoft is a middleware company. Essentially, they sell a service that helps you glue different systems together, whether those are legacy on-premises or SaaS-based systems, and they help do data transformation, connect events, things like that. You can see why that would fit really well in the Salesforce platform, and that's why Salesforce acquired them a couple of years back.
So this is a pretty important piece of software in most enterprises that have it deployed, and they can deploy it as an on-premises version. Now, if you are well into the world of SaaS, you may forget how much of a pain on-premises software can be.
If there's a critical software vulnerability, you need to make customers aware of it, and they then need to download the fix and perform their own upgrades. That's as opposed to a SaaS-based system, where you are running the service and can simply upgrade things behind the scenes; customers never need to know the steps to resolve the issue because you've resolved it for them, and you can simply make them aware of any potential impact.
So MuleSoft has a bunch of these on-premises customers, and they sit in a critical spot in a lot of these software stacks. So when they had a software vulnerability, they took what is unfortunately an extraordinary step. And I say unfortunately because I think this should probably be standard practice for serious enough vulnerabilities.
Now, Catalin Cimpanu, writing for ZDNet, covered this story, and he had a great way of portraying it; I'll link to that article so you can read it yourself. Essentially, he was saying that MuleSoft was actually picking up the phone and reaching out to these customers. They sent an email to set up calls and then just started dialing and setting up meetings with the customers to help explain the context of the issue and the steps that they needed to take.
Now, he also actually got on a call and was immediately connected with the Chief Technology Officer and the Chief Trust Officer for MuleSoft. So he had two of the most senior people from the company on the call with him, explaining the issue. I don't know if that's what every customer got, or simply what Mr. Cimpanu got because he's a journalist.
But the result is the same: this company realized that there was an issue that needed to be explained and was important enough that people needed to address it. So instead of just putting out an email, putting it up on the website, or updating a dashboard, they actually reached out to the customers and said, hey, here is the problem.
Here are the steps to mitigate this problem. And here's the context in which this problem occurred. I think that is a great example, and I think it should be done more often. But here's the one thing I really wanted to call out, because it's not always practical to call all of your customers:
what this approach really provides, and what I think is missing in the vast majority of disclosures, is context. Most of the time we get these patches issued or vulnerabilities announced, and we have a CVSS score. That gives you a reasonable idea of the severity of the potential impacts.
You can get some of the details and you can start to grasp the issue. But the challenge is applying it to your own situation, actually putting it in perspective: how likely is it to be exploited? How easy is it to exploit? The numbers don't really tell you the story.
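To make that concrete, here's a minimal sketch, in Python just for illustration, of the CVSS v3.x qualitative severity bands. The bands themselves are from the CVSS spec; the example exposure notes in the comments are invented. The point is that two vulnerabilities can land in the same band while posing very different real-world risk in your environment, and the score alone can't tell you that.

```python
# Minimal sketch of the CVSS v3.x qualitative severity bands.
# The score compresses a lot of nuance into one number and says
# nothing about how exposed *your* deployment actually is.

def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Two very different situations land on the same rating:
print(cvss_severity(9.8))  # "Critical" -- internet-facing, public exploit available
print(cvss_severity(9.8))  # "Critical" -- internal-only system, no known exploit path
```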
And that, I think, is the number one challenge we have in patching, because most organizations don't have a smooth patch process. I know this has been the number one issue in security forever. But the reality is most organizations can't patch smoothly or quickly.
I saw a horrifying stat the other day, and I'll put the source in the comments or as an overlay here because I can't remember it off the top of my head. But essentially, 90 days after companies start patching, they've only covered just over 40% of the intended systems.
Now, hopefully that's prioritized by risk. But the reality is that even three months after you've started patching, you haven't got full coverage, or even half coverage. And that's pretty typical. It's really a big people problem, not so much a technology problem.
It's a weak spot in security. And because of this patching challenge, communication is critical, because you need to know when it's time to pull the alarm, when it's time to pull the brakes and say, wait, we need to deploy this patch immediately because the risk is high enough.
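Here's a rough, hypothetical sketch of what that kind of triage can look like. Every field name, weight, and threshold in it is made up for illustration; the point is simply that the "pull the alarm" decision needs context that the published score doesn't carry.

```python
# Rough sketch of context-aware patch triage. The factors and thresholds
# are invented for illustration; the decision needs more inputs than the
# CVSS score alone.

from dataclasses import dataclass

@dataclass
class VulnContext:
    cvss_base: float            # published severity score
    internet_facing: bool       # is the affected system reachable from outside?
    exploit_in_the_wild: bool   # is it being actively exploited?
    compensating_control: bool  # WAF rule, network segmentation, etc.

def pull_the_alarm(v: VulnContext) -> bool:
    """Return True if this patch should jump the normal patch cycle."""
    risk = v.cvss_base
    if v.internet_facing:
        risk += 2.0
    if v.exploit_in_the_wild:
        risk += 3.0
    if v.compensating_control:
        risk -= 2.0
    return risk >= 9.0

# Example: a "High" 8.1 on an exposed system with active exploitation
print(pull_the_alarm(VulnContext(8.1, True, True, False)))  # True
```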
I thought this approach from MuleSoft was commendable. It really tackled that one weak spot of most vulnerability disclosures, where we just kind of put them out there and say, there you go, I disclosed it.
I gave you the patch, I gave you the example, you figure it out. Context is missing, and context is key. What do you think? Hit me up online @marknca, in the comments down below, and, as always, by email, me@markn.ca. How do you handle patch management?
Or, maybe a better question than that: how do you evaluate which patches are actually important enough to pull the trigger, to raise the alarm bells and get them pushed out really, really quickly? That's a great question.
Let's talk about that online. I hope you're set up for a fantastic day, and I'll see you on the next episode of the show.