
Facebook's 10 Year Challenge

Mornings With Mark no. 0156


Bad Robot Transcript

Morning, everybody. How are you doing today? In this episode of the show, I wanted to talk to you about the Facebook 10-year challenge. The reason I wanted to discuss this issue a little more is that it's popped up in normal conversation, and I mean that quite literally: I've been out and about around town.

I'm hearing it pop up, people are discussing it, and it's coming across in mainstream articles. It's not just something niche. Obviously it's a big social media phenomenon right now, but the flip side, whether this is something more sinister, is actually a talking point, and I think that's a wonderful, wonderful thing.

Let's get a couple of things out of the gate real quick. What is this challenge? It's really simple: on one side you post a photo of yourself in 2009, and on the other a photo of yourself this year, in 2019. The idea is to show the difference, or ideally the lack of difference, between the two photos. Now, the undercurrent, the concern some people have raised, is that maybe this is a dataset being collected for something a little more sinister: helping computers identify how people age. It seems like the genesis of this idea started with a simple tweet from Kate O'Neill.

She's an entrepreneur, a fantastic speaker, and a sometime contributor to Wired magazine. It was an off-the-cuff, sarcastic tweet along the lines of: the 2009 version of me would have been all over this; the 2019 version of me is asking whether this is really just a way to get a tagged data set. Now, the question I was asked a few times this week was: why is this important? Aren't computers already really good at facial recognition?

The answer is yes, they are very good at facial recognition. Where we're moving now in the state of computer science is identifying whose face that is, and how that face changes across situations and over time. Figuring out whose face it is sounds simple, but you need to match it against another data source, and that's where social media platforms like Facebook are really strong. It's a real strength for them because they know this is Mark's profile and these are pictures that Mark has posted of himself.

For those who aren't familiar, it actually just struck me as I looked at my monitor here: facial recognition technology has been deployed widely for a very, very long time, and it's not always bad. This camera is automatically detecting my face to maintain autofocus, and I unlock my iPad and my phone with my face as well.

It's not necessarily a bad technology, and we'll get to the implications in a minute. So the state of computer science is that we can identify faces very, very easily, and if you can link them to another data set, that means you can identify people. The challenge is in less-than-optimal situations.

Right now I'm lit by a key light, I'm very close to the camera, I'm the only subject, and I'm looking directly at the lens. That's an ideal circumstance for facial recognition. But what if I'm in a busy crowd and you can only see part of my face? Recognition gets less accurate because fewer distinct features are visible, and things really break down over time. This was demonstrated by Microsoft in 2015, when they deployed a tool that said, hey, upload your photo and we'll tell you how old you are. It gave me somewhat laughable results, though sometimes it got it right.

That was because there was a lack of data at that point; there wasn't a clean data set. And this is sort of the second misconception: people assume machine learning is really, really easy to do. The models themselves are not super difficult to construct in the grand scheme of things.

Obviously that's a relative level of difficulty; the real challenge is getting really good data. So if you're asking a computer to figure out the difference between a cat and a dog, it needs samples. It needs to know what attributes make up a cat or a dog, or better yet, it needs a fully curated set of photos labeled "this is a cat" and "this is a dog" so it can learn from them. It's the same thing with faces. Now, there are a lot of great, easily accessible datasets for facial recognition, both two-dimensional and three-dimensional. There aren't a lot of great ones around aging.
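
To make that concrete, here's a minimal sketch of why the labels are the valuable part. Everything in it is an assumption for illustration: the pets/ folder layout, the two class names, and the tiny ResNet training loop are hypothetical, not any system mentioned in this episode.

```python
# A minimal sketch of training a cat-vs-dog classifier from labeled photos.
# Hypothetical setup: photos sorted into pets/cat/*.jpg and pets/dog/*.jpg --
# the folder names ARE the labels, and without that curation there is nothing
# for the model to learn from.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

data = datasets.ImageFolder(
    "pets",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # small off-the-shelf architecture
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cat, dog
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                         # a few passes over the labeled set
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # the labels are what drive learning
        loss.backward()
        optimizer.step()
```

The model architecture is the easy part to swap in; collecting and labeling the photos is where the effort and the value sit, which is exactly the issue with aging data.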

That was sort of the gist of Kate's humorous tweet: what if this is just a big experiment to get us to do the work for them? And that's not a completely outlandish idea. That said, I don't think Facebook started this at all, and I don't think there's an ulterior motive.

I think this is just a fun New Year's resolution type of thing: look at me a decade ago, think of what I've accomplished since then. But the premise of crowdsourcing datasets is not unheard of. We've all completed CAPTCHAs on websites, the "I am not a robot" checkbox that pops up with a bunch of images; it used to be text you had to go through and verify, and now it's images.

These are training data sets. This is reCAPTCHA; Google owns it now, and it was originally used to verify Google's book-scanning efforts. Where the OCR made mistakes, you confirmed whether a word was actually that word, yes or no. Now it's doing the same thing with image recognition: we're tagging these things for them.

So when they ask, "can you show us all the pictures that contain a fire hydrant," you're confirming or adding new results and helping tag the data. So the whole premise that the 10-year challenge was somehow a collection effort for a dataset isn't crazy.
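
As a rough illustration of how crowdsourced answers turn into usable labels, here's a small majority-vote sketch. It's a conceptual toy, not how reCAPTCHA actually aggregates responses; the function, the vote threshold, and the sample answers are all made up.

```python
# Conceptual sketch: turn several noisy human answers about one image tile
# into a single label by simple majority vote.
from collections import Counter

def aggregate_label(answers, min_votes=3):
    """Return the majority answer once enough people have weighed in, else None."""
    if len(answers) < min_votes:
        return None                              # not enough votes to trust yet
    label, count = Counter(answers).most_common(1)[0]
    return label if count > len(answers) / 2 else None

# Hypothetical responses from five different users for the same tile.
votes = ["fire hydrant", "fire hydrant", "bus", "fire hydrant", "fire hydrant"]
print(aggregate_label(votes))                    # -> "fire hydrant"
```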

It's pretty straightforward. But if I put on my security hat and ask, okay, what's the damage here? It's one extra attribute on an already existing technology and use case; people are already using facial recognition. That horse has already left the barn. The question now is: can we have discussions around the proper use? What's acceptable use, under what circumstances is it not acceptable, at what scale? Things like that need to come out into the open.

We need to have those conversations as a community, whether that's the physical geography of a local community or the larger internet community. When I tried to find any threats that were really there, I couldn't find much, but I did find one extremely useful use case. If you have a big collection of photos spanning many years, today's photo tools have a hard time detecting that an old photo is in fact you; you only really get that across decades' worth of your face after you tag a bunch of them. The tools tag the most recent years because they have pretty good confidence that it's you, and as you start to go back a few years, they keep building that confidence and keep reaching further into the past. Having a dataset that shows common aging patterns, how the attributes of a face evolve over time, would help with tools like that.
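
Here's a toy sketch of that "reach back into the past" idea. The embeddings, the similarity threshold, and the helper names are hypothetical stand-ins for what a real photo library's face model would produce; the point is only that each newly confirmed photo joins the gallery, so older photos get compared against faces closer to their own era.

```python
# Toy sketch: tag a person's photos year by year, newest first, so that each
# confirmed match extends the gallery and helps the next, older photo match.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_backwards(photos_by_year, gallery, threshold=0.8):
    """photos_by_year: {year: face embedding}, compared newest first.
    gallery: embeddings already confirmed to be you (assumed non-empty)."""
    tagged = []
    for year, embedding in sorted(photos_by_year.items(), reverse=True):
        best = max(cosine_similarity(embedding, known) for known in gallery)
        if best >= threshold:
            tagged.append(year)
            gallery.append(embedding)  # confidence compounds as we reach further back
    return tagged
```

The newest-first ordering is the point: a 2019 photo is a much easier match for a 2017 photo than for a 2009 one, which is why the walk goes backwards a few years at a time.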

It would also help with some special effects in Hollywood. A lot of the superhero blockbusters lately have had digitally de-aged characters, and this would obviously help with that. Beyond that, I really couldn't detect much of a threat, because unless you can go completely off the grid and then come back on at some point, you're going to be generating a constant set of data anyway.

This type of challenge isn't really going to harm your profile, your privacy level, or anything like that. So, interesting things to talk about. My biggest takeaway from this was that we're finally at a point where this is a mainstream conversation, and that's extremely exciting for someone like myself who works in security and privacy.

What do you think? Have you participated in it? Do you see the broader impact? Did I miss something here? Let me know. Hit me up online @marknca, in the comments down below for those of you on the blog, and as always, for podcast listeners and everybody else, by email at markn.ca. Instead of in 10 years, I will see you on the next show. I hope you're set up for a fantastic day. Talk to you soon.

