
Zoom.us & The Real Cybersecurity Problem

Zoom.us had a pretty egregious security issue this week. Their response was poor despite the best efforts at responsible disclosure by the security researcher who discovered the issue. While this issue has dominated tech headlines, the real problem is much more significant and commonplace.


Watch this episode on YouTube.

Reasonably Accurate 🤖 Transcript

Morning everybody. How are you doing today? On this episode of the show, we're gonna talk about Zoom and their recent security troubles. I know what you're thinking. You're like, enough already, I have had enough of this Zoom thing, because it has been plastered all over the technology news this week, and I don't blame you for being fed up.

But I think we're gonna do a quick recap of the issue just in case you haven't heard about it, and we're gonna dive into what it really means, because I think there's an underlying lesson for all of us to learn here. Now, if you've been hiding under a rock, or more appropriately at the pool or at the beach, taking some much-needed recharge time, good for you.

But here's a quick recap of what went on. A researcher published, after a 90-day disclosure period, a vulnerability in the Zoom meetings client for Mac. Now, Zoom is one of the leading video conferencing software solutions. They recently IPO'd and they've got really solid financials; they've got about 50,000 paying customers, which is a ton of users, not to mention the free version has a ton of users as well.

Essentially, it's a chat tool. It's an interactive video conferencing tool. It's a webinar tool. It's, you know, better than the alternatives for the most part, and it's the new player in this field. So a lot of companies are jumping on board.

OK. They had a pretty straightforward security vulnerability, and they handled the situation poorly. I'll link to a bunch of stuff so that you can read about it on your own, because I don't want to rehash it here. But essentially, the security vulnerability was the following.

On Macs, they had a local web server running. So on localhost, on a specific port, Zoom was actually running a web server, and this web server could take commands to launch a meeting, reinstall the client, that kind of thing. It was all designed to get around a security warning in Safari.
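To make that concrete, here's a minimal sketch of the pattern described above: a helper process that listens on a localhost port and accepts simple "commands" over HTTP. The `/launch` path, the `meeting` parameter, and the handler details are illustrative assumptions, not Zoom's actual API (the disclosure reported Zoom listening on localhost port 19421).

```python
# Sketch of a localhost "helper" web server that takes commands over HTTP.
# Path, parameters, and port are illustrative, not Zoom's actual API.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class HelperHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/launch":
            params = parse_qs(url.query)
            meeting = params.get("meeting", ["none"])[0]
            # A real helper would start the desktop client here.
            body = ("launching meeting %s" % meeting).encode()
            self.send_response(200)
        else:
            body = b"unknown command"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=19421):
    # Binds only to localhost, but any page loaded in your browser
    # can still reach it, which is the heart of the problem.
    HTTPServer(("127.0.0.1", port), HelperHandler).serve_forever()
```

Note there's no authentication anywhere in that handler, which is exactly what gets flagged later in this episode.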

Now, anytime you try to launch an external application from Safari, Safari pops up a dialog box for the user to provide consent. It says this web page is trying to launch Zoom or Spotify or whatever the case may be. Do you want to allow it or deny it?

Right. And that's a great security prompt. It makes sure that the browser doesn't unwittingly cross into operating system space and onto your local system. So it's a very good prompt for the user, because you're going, wait a minute, is this what I want from a web browser? Do I want it interacting with my OS in this manner?

Now, Zoom found that was a friction point for users, and as a usability feature, they wanted to eliminate this dialog that was put in place to protect users by the makers of the browser and the operating system. We'll dive into that in the second half.

So what they did was run this local web server, because now when a web page was trying to open up a meeting, it wasn't trying to launch an application; it was calling out to another web server, and there was some interesting work done in the background to handle that cross-site referencing.

But basically, it worked so that when you clicked on a link, it launched the web page and the meeting popped up in the application itself without that additional prompt. Remember, this is all to avoid one click. Now, it turned out that even if you uninstalled the application, this web server continued to run, which meant that if you didn't have the client and you got a meeting request, it would actually download and install the client again in the background, all in the name of usability.
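You can check for that lingering-server situation yourself: after uninstalling an app that used this pattern, see whether anything still answers on its localhost port. This is a generic TCP check, not anything Zoom-specific; the port number you'd probe (19421 in the reported Zoom case) is whatever the helper used.

```python
# Quick check: is anything still listening on a localhost port after
# the app was uninstalled? The port to probe depends on the app.
import socket

def port_open(port, host="127.0.0.1", timeout=0.5):
    # Returns True if something accepts TCP connections on the port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `port_open(19421)` still returns True after you've dragged the app to the trash, the helper is still running.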

So you can see there are some issues here. There was an uproar, and after initially misstepping a few times, in the way they handled the initial disclosure from the researcher and the way they handled the public disclosure, Zoom finally backtracked and actually fixed the issue.

So step one: update your Mac client for Zoom and you don't have to worry about this anymore. But what I found really interesting in the discussion around it was the focus on the technicalities. Is this a good move? Was this smart, this, that, or the other thing?

And let's just say, from a security perspective, this was not a great pattern, but it's not unique by any stretch of the imagination. A lot of your desktop applications are actually running web servers locally. That's a result of microservices-style development.

That's a result of trying to get around security issues with the browser. That's a result of a number of things. But for me, the biggest reason why this is happening is that it's a clear indication that we in the security community continue to fail to work with application teams. Because when the Zoom vulnerability came out, I pictured the meeting in my head, and I guarantee you this is almost word for word what happened.

Hey, we've got user reports saying that when they try to open up a meeting, they're getting prompted by Safari about whether or not they want to actually open Zoom before they ever even get into our product, and users are confused by it. What do we do?

They said, well, we can try to eliminate that. What's the root cause? So we do a little technical digging. You dive in and say, well, wait a minute, it's because it's trying to launch an external application. If we ran our own web server locally, then we wouldn't be launching an application.

We'd simply be calling another web page. We could eliminate that prompt and make a much smoother experience for our customers, meaning the customers can click the link and just push right on through. Everyone at the table would be like, yeah, yeah, yeah, let's do that.

That makes sense, because that's a better customer experience for the intended action, which is launching a meeting from their calendar or from their email or wherever they actually got that invite from, right? Going through the browser to launch it.

It's a pretty central and common usability pattern. So if somebody tweeted you a link, DM'd you privately, emailed you, texted you, whatever the case may be, you'd be able to get through. Pretty straightforward. Team agrees. Let's do it.

They push it out to production and, hey, wait a minute, while we've got this web server here, wouldn't it be great if, when you didn't have the client currently installed, we could actually push that down? Yeah, and we could do a whole bunch of other actions with this local server as well.

It'll be great. What didn't come up in that conversation, and I guarantee this, was that nobody popped up and said, wait a minute, why is that dialog there in the first place? What are the security challenges around this approach?

Or even if they did, and it's extremely rare that they would have had this conversation, the security concerns get overridden by the usability concerns, by the friction in the user experience. And I get it from the product management perspective. You want to reduce as many friction points as possible.

You want this smooth, user-first experience, but security is part of that. Because, you see, if I send you to a web page that loads an image from localhost on this port, I can start to manipulate your Zoom client, right? And Zoom is pretty active about telling people who uses Zoom. You go to their page and it says, hey, here are all the customers that love Zoom.

And it's pretty common to see these kinds of links. You can Google for links that have Zoom in the URL, so you can see what companies are using it. And all of a sudden you've got a pretty easy scenario to start manipulating people's access.
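The image trick works because an `<img>` tag triggers a plain GET request, and the browser will issue it to localhost just like any other host. This sketch simulates that request in Python; the port, path, and parameter are illustrative assumptions, not Zoom's actual endpoint.

```python
# Simulates what a hostile page does with something like:
#   <img src="http://localhost:19421/launch?meeting=1234567890">
# The browser fetches the "image"; the helper treats it as a command.
# Port, path, and parameter are illustrative assumptions.
import urllib.request

def simulate_img_tag(port=19421, meeting_id="1234567890"):
    url = "http://127.0.0.1:%d/launch?meeting=%s" % (port, meeting_id)
    try:
        with urllib.request.urlopen(url, timeout=1) as resp:
            return resp.status  # 200 means the helper accepted the command
    except OSError:
        return None  # nothing listening on that port
```

Because there's no user prompt and no authentication in the way, any page you happen to visit can fire this request silently.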

Now, is this the end of the world? Absolutely not. In the worst-case scenario, your Zoom client would pop up, your video would be on and the audio would be on, and you might not notice it right away, but you would probably find out pretty quickly, because the light is on on your webcam and the application is now running and active.

So if you don't keep Zoom open all day, you'd be like, why is it open? If you do keep it open all day, you might not notice for a while. So there could be the intrusion of someone looking into your workspace, and that's serious.

That's significant. That's a security and a privacy violation. But the odds of that happening, the odds of that being successful, are pretty low. So there's no reason to Chicken Little this. But there is this concern that security was not part of the process.

Because if there's a security advocate in the room, the very fact that you're breaking a fundamental security control in the browser should have been flag one. The fact that you're now running a web server should have been flag two. Hey, wait a minute, why are we running an unauthenticated web server that has to do a whole bunch of backflips to make this stuff work?

These two things should have been red flags. Let alone the fact that on the technology side, the developers should have been saying, wait a minute, do we really want to run a web server? That's a new set of operational overhead that we need to account for.

We need to maintain it, we need to patch it, we need to do all these other things. And odds are, if they made this choice in the first place, they probably weren't patching it or working on this server that aggressively, so there could be other security issues down the line.

But for me, while the Zoom vulnerability gained a huge amount of popularity and visibility, the bigger issue here is that lack of security discussion on the development team, because from a security and privacy perspective, it's a pretty clear no. But based on my experience, and the amount of gray I've got in my hair and my beard, you can probably tell I've been around the block a few times.

That discussion rarely ever happens. And that is the real problem here, and it's coming up time and time again. We cannot continue to have security off on the side as a special thing that gets applied maybe at the beginning, maybe at the end, and ignored throughout. That leads to really bad security outcomes.

Even if we're saying, you know, hey, we take security seriously, we talked to them twice during this six-month project, that's not serious security, that's not effective security. But sadly, that is the state of security in most organizations.

What do you think? Let me know. Hook me up online @marknca, in the comments down below, and as always by email, me@markn.ca. I look forward to talking to you about this issue, hopefully laying the Zoom one to rest and talking about the bigger issue.

We'll see you online and see you in the next episode of the show.
