
A.I. Amok

Mornings With Mark no. 0047

Watch the episode on YouTube

Join the discussion on LinkedIn

Share on Twitter

Bad Robot Transcript

How is everyone doing today? I'm doing well. I'm going to double-check my feed game today — I'll have confidence in the stream. Okay. One of my favorite words of all time is "amok," as in "AI is running amok."

It's a word you never use in a positive context, because it means uncontrollable or destructive behavior. And that's what we're seeing in the context of the announcements we saw this week from Google I/O — and Google Next is coming up in a few months — and some other examples, interestingly enough.

Interestingly enough, the first wave of Google announcements was sort of hit-or-miss. I found it ironic that they called out the beer emoji — the follow-up to the hamburger emoji they had made a mistake on — and of course they'd made a mistake on the foam in the beer, too. After correcting that, they made a whole bunch of new announcements yesterday: updates to ML Kit, and Android P, which has a whole bunch of new machine learning techniques to help you use your phone less. And then, importantly, they announced that Google Assistant will be able to make phone calls to businesses on your behalf, autonomously, to set up appointments and things like that.

"Run amok" was the first term that sprang to mind, and it has stayed there. Let me bubble that up a little, because the questions on the table — AI, ethics, and what we want to do — are a bigger discussion. This hits on another pet peeve of mine: I get asked quite often by journalists around the world, "Was this a smart security decision? Was this a good decision for privacy?" — things like that.

My response is almost always the same — at least I hope it's relatively consistent: I don't have a problem with any given decision, as long as it's made explicitly. And that's the challenge with a lot of these Google announcements. They say, "Hey, look at all these great features in Android P to help reduce your addiction to your smartphone," but what they don't highlight is that they need to know you even more intimately, and collect even more data to build a better profile of you, in order to do that.

"Look at all these cool advances we've made in Google Photos" — and they are cool, don't get me wrong; these features are amazing. But it's "look at these cool new features we've made — and, oh yeah, of course, to get this better awareness of your photos..."

"...we harvested even more and scanned everything we could to train our AIs better." Similarly, Google Assistant is stepping very much into the real world, interacting with other people without saying what it is. That's a bit of a red flag for a lot of people, and it fired up on Twitter a little earlier.

There's a concern there — and it's legitimate. Are we cool with AIs having human interactions without the human knowing they're interacting with an AI? Contrast that with AI used to augment people: in the case of the Welsh police that we talked about yesterday, the AIs were scanning through hundreds of thousands of faces to bubble up a few for humans to take to the next step — not taking it all the way themselves. And I can understand the Silicon Valley mindset of saying, "What a cool thing we can do for users — we can get the AI to call places and book appointments for them."

But what about the flip side — the people on the other end of the call? You pick up, it's this weird voice, you don't know how to handle it, and it turns out to be an AI that's poorly trained or behaving badly, as opposed to a real human.

So there are definitely some lines being crossed here. One of my biggest problems is that the underlying takeaway from Google I/O this week was "we're going to build better profiles about you." They've come through relatively unscathed by the spotlight that's been aimed at Facebook recently. Again, for most people these are implicit transactions, not explicit ones — and I don't like that.

I like explicit transactions. I think that's where we find strong privacy, and that's where we find strong security: laying things out for people and saying, "Here's what we're doing. Here's what you get back. Are you okay with that?" There's also a parallel discussion happening right now on Twitter with a few folks around Apple and their disclosure — when you say, "Hey, I want all the data you have on me," is what they send...

...back actually complete or not? I'd raise the point that Apple was sending it back within half a day; where they failed was that they weren't crystal clear: "Here's what we have. Here's what you have stored with us that we don't have access to, or only have access to under a legal request. And here's what we trace about you to build a profile around you." So there's a lot of work to be done here, and I think this is the tip of the iceberg. My big takeaway for today — maybe a caution for today — is that when companies come out with all the marketing spin of "Hey, look at all this stuff!"...

...that should set off warning bells for you. I like explicit: "Hey, this is what we're doing. This is what we're giving you in return." As a positive example, most security vendors — including the one I work for — do this. You either opt in or opt out, depending on how they prefer...

...to sharing some anonymized security information. Here's what happens on the back end: it gets analyzed to see if it's malicious, and everyone benefits from that. There's no tracking, there's no tracing, there's no profile being built up. It's not perfect, but at least it's an explicit agreement.

"Hey, here's what you give — some of this data, cleansed or at least filtered. Here's what you get back in return: better protection." It's a direct transaction, and it's not just you getting those protections — everybody does. Facebook makes a very understandable trade-off, but it's implicit, not explicit: by sharing all of your activity on the network...

..."we protect your user information." But information about the users is exactly what they sell to advertisers — to the point where it could impact community norms and society's norms. And we're seeing that in a lot of places. There's an election coming up here in Ontario, in Canada, and I'm seeing tons of ads on YouTube that are explicitly biased — and probably against Canadian campaign laws — with no information about who's putting them out or who's trying to influence the views of people in the province. So, major issues, and a real point of tension here.

But the takeaway is this: when it comes to user privacy and security, explicit agreements are far better than implicit ones. AI is starting to creep into a lot of places, and we need to talk about it now, before it goes everywhere — so that we can make these explicit agreements and understand what we're trading off and what we're getting back in return.

None of these things are bad, necessarily. They just need to be out in the sunlight, transparent, so that you can consciously walk in — as opposed to things just being pushed on you. So please, by all means, hit me up online — @marknca — I'm looking forward to talking to you about this issue.

The only way we're going to get through it — and it is going to be a conversation for the next few years, for sure — is if the discussion starts now. Have a great day. I hope your day is as beautiful as it is out here; I decided to film outside today because it's already +24°.

Why wouldn't you? I'll talk to you all soon.
