
AI's Security & Privacy Impact

Mornings With Mark no. 0046

Watch the episode on YouTube

Join the discussion on LinkedIn

Share on Twitter

Bad Robot Transcript

Mornings with Mark. I want to talk about AI, because it came up a lot at yesterday's Microsoft Build conference, and today is Google I/O, which is going to tackle a significant amount of it as well. In fact, I was just reading an article, pulling it up here, from The Guardian this weekend about how the police in South Wales deployed facial recognition software at the 2017 Champions League final, and it had a 92% false positive rate.

Now, that case with the Welsh police was really interesting. They didn't make any false arrests based on it. They were using the AI to filter matches up for further human investigation, which tends to be the best use case for AI: have the AI sort through all of the data, then push the results up to humans for further analysis.
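To put that 92% figure in perspective, here's a minimal sketch of the arithmetic in Python. The counts are illustrative assumptions, rounded to the same ballpark as the reported figures, not the exact numbers from the deployment:

```python
# Illustrative counts only (assumed for this sketch, not exact figures).
total_flagged = 2500   # people the facial recognition system flagged
true_matches = 200     # flags that were actually on the watch list
false_positives = total_flagged - true_matches

# "False positive rate" here is used colloquially: the share of flags
# that were wrong (1 - precision), not FP / (FP + TN).
fp_rate = false_positives / total_flagged
print(f"False positives: {false_positives}")              # 2300
print(f"Share of flags that were wrong: {fp_rate:.0%}")   # 92%
```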

But this ties into a panel I attended at South by Southwest on the ethics of AI. A lot of the AI announcements you'll hear this week are along these same lines. And I think AI is a huge help for the problem of sifting through the massive amounts of data that we're creating; the challenge comes when we start to make impactful decisions based on it.

So I had spoken about this a number of episodes back: Palantir had fired up a system that they were using to help police departments make predictive interventions to prevent crime from happening. And of course, lo and behold, it turned out to be insanely biased against the Black community and other minority communities. The problem, of course, was that the data going into it was insanely biased, because policing in those neighbourhoods has historically been biased. So there was a problem on the input side.

Right, and these are the types of issues at play. I'm not saying there's a simple fix here, but this was the thrust of that panel with some of the luminaries in the field. I'll link to the panel description below, because it's well worth going through those researchers' notes and reading their papers on the ethics of AI.

Whether it's Fei-Fei Li from Google or any number of other luminaries in this field, they're looking at the impact of the decisions behind the things they're making. That's something the rest of us fail to do quite often: look at the impacts of some of the decisions that we make.

So in the case of the Welsh police, they ensured they weren't making arrests directly based on the system, but still, 92% of the people flagged were just fans there for the football match. That's an issue. Or look at the details of the fatal self-driving crash, where they've just hired somebody from the NTSB to weigh in on decisions like whether to swerve. I guess what I'm getting at here is that there's a lot of cool stuff going on in AI, but there's a much bigger question that we need to keep asking ourselves, and to ask regularly.

And that question is: are we programming in the right types of decisions, with the right biases, with the right preferences for the community? It's funny, because this is one of those episodes where everything is just kind of jumbled, and I'm sorry that it's all off the cuff.

A couple of weeks ago, I was kindly invited to a Canada Beyond 150 event to review some of the young public servants' work on the future of public service. One really interesting passing discussion was about the transparency behind AI decision-making: if governments are going to use AIs to help make policy decisions based on better data and better information, how can you justify a given decision if there was a mistake? And even if there wasn't, how can you trace that decision back in the name of open and transparent government? I think we're facing that same issue

now in a lot of places where AIs are making moral decisions for us. Machine learning is making more and more decisions, but it's really difficult for us to trace those decisions back to see whether they're correct or not. XKCD had a phenomenal cartoon the other day where, in jest, they said they don't trust a machine learning plot when it's easier to draw new constellations in the scatter of data points than it is to find the mean, the average, or the trend line.
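In that spirit, here's a minimal sketch of the idea, using made-up data: a trend line always exists mathematically, but when the fit is weak it explains almost nothing, and that's exactly when you should be suspicious of decisions based on it.

```python
# Made-up data illustrating the XKCD point: you can always fit a
# trend line, but a weak correlation means it tells you very little.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 0.2 * x + rng.normal(scale=3.0, size=x.size)  # mostly noise

slope, intercept = np.polyfit(x, y, 1)   # the "trend line"
r = np.corrcoef(x, y)[0, 1]              # how well it actually fits

print(f"Trend line: y = {slope:.2f}x + {intercept:.2f}")
print(f"Correlation r = {r:.2f} (r^2 = {r**2:.2f})")
# With r^2 this close to zero, the scatter is basically constellations.
```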

I don't know; that XKCD line is a really apt statement, and I think it's something we're all going to have to dive into. So, a pretty random discussion this morning, just something that's on my mind as I go watch day two of Microsoft Build and day one of Google I/O.

I expect there to be tons of AI at both. Anyway, it's exciting, with lots of cool stuff going on. I was even thinking about a pipeline for this show: simply pull the audio out, run it through some analysis, do some sentiment analysis, and figure out whether I'm positive or negative on a given day and what I'm really talking about.
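As a rough sketch of what that pipeline could look like: the file name below is hypothetical, transcription is assumed to have already happened (for example, via ffmpeg plus any speech-to-text service), and NLTK's VADER model is my stand-in choice for the sentiment step, not something named in the episode.

```python
# Hypothetical sketch of the show-analysis pipeline described above.
# Step 1 (not shown): extract the audio and transcribe it to plain text.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time model download

def episode_sentiment(transcript_path: str) -> float:
    """Return a compound sentiment score in [-1, 1] for one episode."""
    with open(transcript_path, encoding="utf-8") as f:
        text = f.read()
    sia = SentimentIntensityAnalyzer()
    return sia.polarity_scores(text)["compound"]

# Hypothetical usage: positive score = upbeat episode, negative = critical.
score = episode_sentiment("mornings-with-mark-0046.txt")
print(f"Episode sentiment: {score:+.2f}")
```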

Even though I ramble for a while, that kind of analysis would be extremely valuable. But as we push into decision areas with more impact on the real world and on society, we need to be careful. I'm not saying we need to stop, or that it's time to put the brakes on this stuff; we just need to be aware of it.

And that's the biggest challenge right now: a lack of awareness of the impacts of these decisions. Just because we can build something doesn't mean we should. Anyway, it's all about discussion; it's all about sunlight. Hit me up online @marknca, or in the comments below if you're seeing this after the fact on any of the other networks.

I hope you're set up to have a fantastic Tuesday. I look forward to talking to you and engaging soon. Take care. I will fix the microphone problem at some point so that the audio you're hearing is better, and tomorrow, of course, I'll be better lit.

Have a good one. Take care.
