Watch this episode on YouTube.
Reasonably Accurate Transcript
There we go. All right. So, Mornings with Mark. I wanted to talk about AI, because yesterday was Microsoft's Build conference, where AI came up a lot, and today is Google I/O, which is going to tackle AI in a significant way. In fact, I was just reading an article in the Guardian from the weekend reporting that the South Wales Police had deployed facial recognition software at the 2017 Champions League final with a 92% false positive rate. 92%! AI is far from flawless.
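As a rough sketch of what a false positive rate like that means in practice (the counts below are made up for illustration; they are not the actual South Wales figures):

```python
# Illustrative (hypothetical) numbers showing how a false positive
# rate is computed for a facial recognition deployment.
alerts = 2500          # hypothetical: total faces the system flagged
true_matches = 200     # hypothetical: flags that really were on the watch list
false_positives = alerts - true_matches

false_positive_rate = false_positives / alerts
print(f"False positive rate: {false_positive_rate:.0%}")  # → 92%
```

The point of the arithmetic: even a system that looks impressive in a demo can flag overwhelmingly innocent people once it scans a big enough crowd, which is why a human review step downstream matters so much.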
Now, that case with the Welsh police was really interesting. They didn't make any false arrests based on it; they were using the AI to filter results up for further human investigation, which tends to be the best use case for AI: have the AI sort through all the data, then push the results up to humans for further analysis. But this ties into a panel I attended at South by Southwest on the ethics of AI, because a lot of the announcements you're going to hear this week are just, "Yay, AI! It's the best thing ever." And I do think it's a huge help for the problem of sifting through the massive amounts of data we're generating. The challenge comes when we start to make impactful decisions based on it. I spoke about this a number of episodes back: Palantir.
They had fired up a system that they were using to help police departments make predictive interventions to prevent crime from happening. And, lo and behold, it turned out to be insanely biased against the Black community and other minority communities. The problem, of course, is that the data going into it was insanely biased, because policing in those neighbourhoods has historically been biased. So everyone says, "Oh, the AI is biased." Well, no, the data you shoved into it was biased, and that bias came out the other side. These are the kinds of issues at play, and I'm not saying there's a simple fix here, but this was the thrust of that panel with some of the luminaries in the field. I'll link to the panel description; you're well off going through those researchers' notes and their papers on ethics and AI, or following Fei-Fei Li from Google, or any number of luminaries in this field who are looking at the impact of the decisions of the things they're building.
And that's something we fail to do quite often: look at the impacts of some of these decisions we make. In the case of the Welsh police, they ensured they weren't making direct arrests based on it, but still, 92% of the people flagged were innocent, just fans there for the football match. That's an issue, right? Or Uber, diving into the details of the fatal crash of its self-driving car; they just hired somebody from the NTSB, and the questions there centre on the lack of a decision to swerve. I guess what I'm getting at here is that there's a lot of cool stuff going on in AI, but there's a much bigger question that we need to keep asking ourselves, and to ask regularly.
And that's: are we programming in the right types of decisions, the right biases, the right preferences, for the community? It's funny, because this is one of those episodes where everything just kind of jumbles together, and I'm sorry it's all a bit out of whack. But a couple of weeks ago I was kindly invited to a Canada Beyond 150 event to review some young public servants' work on the future of public service. And one of the really interesting and fascinating discussions was about the transparency behind AI decision making. If they were going to use AIs to help make policy decisions based on better data and better information, how could you justify a decision if there was a mistake? Or, even if there wasn't, how could you trace that decision back in the name of open and transparent government?
How could you trace that decision back? I think we're facing that same issue in a lot of places. AIs are making more decisions for us, machine learning is making more decisions, but it's really difficult for us to trace those decisions back to see if they're correct or not. xkcd had a phenomenal cartoon along these lines, joking that you shouldn't trust a machine learning plot when it's easier to find a new constellation in the scatter of points than to see the mean, the average, or the confidence line.
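The xkcd joke has a concrete statistic behind it: you can always draw a line through a cloud of points, but the correlation coefficient tells you whether the line means anything. Here's a minimal sketch, with synthetic data invented for illustration:

```python
import random
import statistics

# A line can always be fit to data; the Pearson correlation coefficient
# tells you whether the trend is real or you're just naming constellations.
random.seed(42)
xs = [float(i) for i in range(50)]
# A faint trend (slope 0.1) buried under heavy noise (std dev 20).
ys = [x * 0.1 + random.gauss(0, 20) for x in xs]

def pearson_r(x, y):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(xs, ys)
print(f"r = {r:.2f}")  # likely weak, given how much noise swamps the trend
```

A low |r| is the "constellation" zone: the fitted line exists, but trusting a decision made from it is exactly the traceability problem above.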
Now, that's a really apt statement, and I think it's something we're all going to have to dive into. So, a pretty random discussion this morning; it's just something on my mind as I get ready to watch day two of MS Build and day one of Google I/O. I expect tons of new AI announcements, and it's exciting.
There's lots of cool stuff going on. You know, I was thinking of an AI pipeline even for this show: simply pull the audio out and then do some analysis, some sentiment analysis, figure out whether I'm positive or negative on a given day, or what I'm really talking about when I've been rambling for a while. That kind of stuff is extremely valuable, but as we push AI into more impactful decision areas for the real world and for society, we need to be careful. Not stop.
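A toy sketch of the kind of pipeline I mean, assuming the audio has already been run through a speech-to-text step; the word lists are made up for illustration, and a real pipeline would use a trained sentiment model rather than keyword counting:

```python
# Toy sentiment pass over an episode transcript: count positive versus
# negative words and score the episode between -1 and +1.
POSITIVE = {"cool", "exciting", "best", "phenomenal", "fantastic", "valuable"}
NEGATIVE = {"biased", "mistake", "problem", "issue", "fatal", "difficult"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]: +1 if all hits are positive, -1 if all negative."""
    words = [w.strip(".,!?'\"").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("There's lots of cool stuff going on and it's exciting"))  # → 1.0
```

Low stakes if this scores an episode wrong; the whole point of this episode is that the same shortcut applied to policing or policy is a very different matter.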
I'm not saying we need to stop, or that we need to do something to prevent this stuff. I think we just need to be aware of it, and the biggest challenge right now is a lack of awareness of the impacts of these decisions. Just because we can build something doesn't mean we should. Anyway, it's all about discussion, it's all about sunlight.
Hit me up online, I'm @marknca, or in the comments below if you're seeing this after the fact on any of the other networks. I hope you're set up to have a fantastic Tuesday. I look forward to talking to you and engaging soon. Take care. I will fix the microphone problem at some point, even though this is one of my favorite mobile mics, so that you can hear better.
And tomorrow, of course, I'll be lit better. Have a good one. Take care.