marknca

Mornings With Mark
no. // 0 0 7 1

Ethics And Action In Technology

Subscribe to the podcast.

Watch the episode here


A full machine-generated transcript follows

Good morning, everybody. How's it going? I'm coming to you a little early today for episode 71. I have some scheduling challenges today and throughout the rest of the week, as I'll be in the EU talking to some of our partners and some customers. So stay tuned.

I'm going to try to hit the normal times, but I may be off by a little bit. This episode is a follow-up to episode 70, where we talked about ethics in technology and ethics in cybersecurity, and how, as a cybersecurity professional, you're going to have to be prepared to have conversations around the use of the technologies we deploy, around access to information, around privacy, and around keeping certain things compartmentalized in an age where the push is very much the opposite.

The push is to share everything, to collaborate, to have no barriers. And I'm an advocate of some of that, as far as removing barriers and facilitating collaboration goes; obviously that's how business moves. But from a security and privacy perspective, you also need very clearly defined controls that are monitored.

What I wanted to talk about in this episode is a follow-up to that, because a couple of things came across the radar yesterday. I'd referenced the Google AI team forcing an internal discussion, which resulted in Google deciding not to renew its contract with the US military. But two other things came to light. The first was that Microsoft had Immigration and Customs Enforcement as an Azure customer and has since pulled down that customer reference case study, given the current political situation around that department in the US. Microsoft is facing some internal backlash as well, with engineering teams and other internal teams essentially asking, "Is this what we want?"

An adjacent story: AWS received a letter from a number of shareholders just before the weekend about Amazon Rekognition Video. Rekognition does face detection as well as object detection within videos, and the concern is its use in law enforcement. There have been a few case studies with law enforcement agencies that have used the technology to set up fairly easy face recognition systems, and Amazon actively markets it to law enforcement and to government as well. So there's that extra aspect. This is interesting because the backlash is coming in two forms: internal teams saying "this is not what we're okay with," and pressure from shareholders. Now, shareholders have always had leverage over companies.

That's just how the publicly traded company system works, so I don't need to dive into that. What I wanted to raise a flag on is the internal teams. You need to be aware that this stuff is not free. By that I mean standing by your ethics and your moral position is not free.

It's not without cost. Now, we're seeing these examples in AI, which is good, because AI is in a position of strength. AI engineers and AI researchers are so in demand right now that it's not even funny; it's a crazy hot field, and they're getting poached from other companies left, right, and center.

None of these folks with AI skill sets are going to have a challenge finding another role. So they're in an excellent position to put their foot down and say, "Wait a minute, this is not what we're okay with. I am not okay with us crossing the line to build face detection technology that could be racially biased, or with our technology being used in a way that helps a military acquire targets," things like that.

They're in a position where, if the company came down and said, "No, this is what we're going to go ahead and do," they could walk away. The same applies, somewhat, to cybersecurity. Cybersecurity skills are in hot demand; if you have the right skill set and the right network, you can leave a role for moral or ethical reasons and easily find another. But that's not true all the time.

It's not true for everybody, and that raises the stakes even more. If you're asked to deploy monitoring technology and you're seeing it abused, there is a cost, or at least a potential cost, to coming forward with that. Now, I'm not saying don't do it. I'm just highlighting the challenge. You thought it was hard before to have a discussion and say, "Hey, wait a minute,

I'm not comfortable with how our technology is being used," or "I'm not comfortable with us sharing information in the following way." You need to understand there are consequences to these actions if you're going to take them. You could be fired, you could be let go, or you could be ignored. Or you could be told, "No, this is definitely the direction the company wants to go," and then you're faced with a really hard decision: is that something you want to associate yourself with? That's an entirely personal decision.

It's a decision within your family, within your community, and within yourself, and I'm not going to give you guidance either way. What I wanted to highlight, though, are the circumstances behind this latest wave of news stories. AI researchers are in a fantastic position to make a stand like this and to force change, because there is such a small pool of qualified AI researchers that a company can't just let them go; they're going to lose market share.

They're going to lose market traction if they let these people go, and that's an enviable position, a fantastic position, to facilitate change from. Cybersecurity is not in quite as strong a position, but it's close, for now, for folks with the right skills; you're not without leverage just because you're in cybersecurity rather than AI research. I think that's really important, and I think it's something we need to discuss. We need to give people the tool set internally to have that discussion.

I was quite fortunate that when I did my master's in information security, ethics was a mandatory course: about a full semester on ethics, with some interesting discussions in the classroom, including assignments where we had to write and defend positions under different structures and different schools of thought. I don't think that's commonly covered now, and I think it has to be an absolutely core component, because the questions go beyond the extreme examples we're seeing right now.

Look at Facebook, or at any social network out there, or at anything where the goal of the technology is essentially to follow people around and try to convince them to buy something, more often than not without their knowledge of the depth of that technology. There are a lot of questions there, and we're not teaching this early enough.

We're not getting people to ask these questions and have this discussion, and we only get to it at the point where it becomes egregious. So we're already at an extreme, on a precipice, and all we can do is walk back a bit, as opposed to never starting down that path in the first place.

No answers again today. I just wanted to highlight these challenges and that special case where AI and cybersecurity professionals are in a position of strength from an employee perspective, though there are definitely consequences. Again, this is an area where more transparency and sunlight are absolutely critical: getting out in front of the issue and talking about it. In that spirit,

hit me up online @marknca, in the comments down below, or, as always, by email at markn.ca. I'm interested in hearing about the challenges you've faced around ethical or moral lines, and your thoughts on how to approach this within your teams, within your company, and within our community.

Let me know. I hope you're set up for a fantastic day. As I mentioned at the outset, the schedule will be a little up and down this week due to travel to the European Union, but I hope to talk to you tomorrow and, as always, online.
