
馃 A.I. In Context

The popular perception of A.I. does not line up with its current capabilities. A look at what's real and what's not.

馃 A.I. In Context

Watch this episode on YouTube.

Reasonably Accurate 馃馃 Transcript

Good morning, everybody. Hope you're having a fantastic start to your day. I wanted to talk to you about artificial intelligence, or A.I., in context. Unfortunately, with any technically complicated subject, there starts to be a sort of smoothing, I think that's a good word for it, of how it gets explained.

When a technology first comes out, everybody gets into the gritty specifics, deep into the technicalities, and explains it properly. As it goes to broader and broader audiences, the explanation gets higher and higher level.

So by the time it's out to the mass market, to the greater community, a lot of that fidelity, a lot of that signal, is lost, and people get a misunderstanding of what the technology is capable of. We see this all the time with scientific research, and it's extremely frustrating: when you read the papers, they're great at taking steps forward.

It's very rare that you'll see a massive breakthrough. They do happen, unfortunately not as often as we'd like, but incremental progress is really hard to turn into an interesting story, so people jump to conclusions or pile assumptions on top of it instead of just reading the science.

We've seen this in the technology field with A.I.: people expect Hollywood-level A.I. when the reality is very, very different. So I wanted to highlight a couple of use cases, some good, some bad, to put a little context around it. And as always, we'll dive into the security and privacy aspects, because that's what this show is all about: understanding technology through a security and privacy lens.

The example I wanted to call to your attention is based on an article The New York Times ran over the weekend, highlighting a clothing company called Mitra, or Matra, I'm not sure how to pronounce it. They're using A.I., driven by demand and a number of other factors within their business, to produce potential designs for clothing.

So, prints for t-shirts. Then another model looks at whether those designs are too similar to, or just dissimilar enough from, existing products they sell. And it turns out one of their top-selling products was actually generated this way, through these algorithms.
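
The article doesn't describe how that similarity check actually works, but here's a minimal sketch of the idea, assuming each design has been reduced to a numeric feature vector (an embedding, which is my assumption, not anything the company has described): compare a candidate against the existing catalog with cosine similarity and reject near-duplicates.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_too_similar(candidate, catalog, threshold=0.95):
    """Flag a candidate design if it is near-identical to any existing product.

    `candidate` and each catalog entry are hypothetical feature vectors;
    the threshold is an illustrative choice, not a published value.
    """
    return any(cosine_similarity(candidate, existing) >= threshold
               for existing in catalog)

catalog = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]
print(is_too_similar([0.89, 0.12, 0.41], catalog))  # near-duplicate -> True
print(is_too_similar([0.1, 0.1, 0.9], catalog))     # distinct -> False
```

The interesting design decision is the threshold: too high and you ship knock-offs of your own catalog, too low and you reject every new print.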

I thought that was really, really cool. The article goes on to cite other examples of machine learning and A.I. techniques used within the fashion industry, and the most prominent one, the central thesis of the article, was around the role of a buyer.

A buyer within a clothing company, within the fashion industry, predicts demand. They think about what's hot and what's not, and try to sell or buy those particular items, depending on where they sit in the chain.

So think of the buyer for Saks or for Nordstrom. They're going to say: you know what's really in this year? Popped collars. Popped collars are in, so we're going to order these shirts because we think they're going to be the new hot item.

I know, I have zero fashion sense; I just picked the goofiest thing I could come up with. But what these companies, and in this example The New York Times cites a company called Stitch Fix, are using is modeling to predict demand.

And I think that's a fantastic use case for these kinds of A.I. techniques, because it's one where you can see significant wins. It's not pushing the technology too far; it's right in the wheelhouse of where these techniques are extremely useful, because there's a ton of data about what people have bought and where they bought it.

And what the volumes were: blue shirts versus green shirts, long shirts versus shorter shirts. There's a ton of properly classified data the system can use to build out demand forecasts appropriately. And what these companies have done is give the buyer that data to evaluate, so they can make a call based on their reading of it and on their understanding of factors that are really difficult to put into these models.
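
As a rough illustration of that kind of demand modeling (the sales records and the forecasting rule here are entirely made up; Stitch Fix's actual models are far more sophisticated), you could aggregate historical sales per category and project the recent trend forward, then hand the numbers to a buyer:

```python
from collections import defaultdict

# Hypothetical sales records: (week, category, units sold). A real system
# would draw on far richer, properly classified purchase data.
sales = [
    (1, "blue-shirt", 120), (2, "blue-shirt", 135), (3, "blue-shirt", 150),
    (1, "green-shirt", 80), (2, "green-shirt", 70), (3, "green-shirt", 60),
]

def forecast_next_week(sales, window=3):
    """Naive per-category forecast: recent average plus the recent trend."""
    by_cat = defaultdict(list)
    for week, cat, units in sorted(sales):
        by_cat[cat].append(units)
    forecasts = {}
    for cat, series in by_cat.items():
        recent = series[-window:]
        avg = sum(recent) / len(recent)
        trend = (recent[-1] - recent[0]) / (len(recent) - 1)
        # The buyer reviews this number; the model alone doesn't place orders.
        forecasts[cat] = avg + trend
    return forecasts

print(forecast_next_week(sales))
# blue shirts trending up, green shirts trending down
```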

And I think that's the key: it's the combination of human and algorithm. Now I'm going to give you a negative example. We all know, because I've talked about it a bunch here, and we've seen it time and time again online and in the media, that Facebook is having significant trouble spotting fake news and identifying political ads and their sources.

They've promised to beef up their staff, but that takes time: training people, paying people, finding them, all that kind of thing. So in the meantime they're applying algorithms based on some machine learning techniques, and those algorithms are not doing great.

There were examples cited, especially leading up to the Fourth of July, where Walmart had a sale on Bush's beans, a very popular baked beans brand in the US. That got flagged because the system thought it was about the Bush family, the political family of former presidents.

So it was flagged as political content. Another example: somebody had been pushing status updates that were a direct transcription of the Declaration of Independence.

In honor of the Fourth, they were pushing out a copy of the Declaration, and it got flagged as political content. Well, yes, it is political content, but it's also a historical document. Any human would have seen that and just gone: no, no, it's fine.

Don't worry about it. But the algorithm flagged it, because even though we're using highly advanced "intelligent" techniques, it's still basically dumb. So that's the bad pattern: just letting the algorithm try to catch things on its own. Contrast that with the fashion example, where the algorithm sorts through the data and a human does the final pass.
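
To see why a purely automated pass fails here, consider a deliberately naive keyword flagger. This is a stand-in of my own invention, not whatever Facebook actually runs (which isn't public), but it reproduces the failure mode: it can't tell a baked-beans ad from political speech.

```python
# Toy keyword list; a real system would be far larger and still share
# this fundamental weakness without human review.
POLITICAL_KEYWORDS = {"bush", "clinton", "election", "vote"}

def flag_political(text):
    """Naive keyword matcher: flags any post containing a watched term."""
    words = {w.strip(".,!?'s\"").lower() for w in text.split()}
    return bool(words & POLITICAL_KEYWORDS)

# A grocery ad trips the filter because "Bush's" matches a keyword...
print(flag_political("July 4th sale on Bush's Baked Beans!"))  # True
# ...while a human reviewer would clear it instantly.
```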

That's where we see a ton of success, and it translates directly into the security world. We're seeing more and more teams applying machine learning and other A.I. techniques to the volume of security data out there. But you can't just rely on that.

You need an analyst to go over those results as well. The idea is to have the computer and the A.I. algorithm do the sorting, the heavy lifting, and then let the human do the final pass, applying their judgment to that massaged and triaged data.
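
Here's a minimal sketch of that triage pattern, with invented alert fields and toy scoring weights standing in for a trained model: the machine scores and filters, and only the high-scoring alerts land in the analyst's queue.

```python
def score_alert(alert):
    """Stand-in for an ML model: score how suspicious an alert looks.
    (A real system would use a trained classifier, not these toy weights.)"""
    score = 0.0
    if alert["failed_logins"] > 10:
        score += 0.5
    if alert["off_hours"]:
        score += 0.3
    if alert["new_geo"]:
        score += 0.2
    return score

def triage(alerts, threshold=0.5):
    """Machine does the heavy lifting; only high scores reach the analyst."""
    keep = [a for a in alerts if score_alert(a) >= threshold]
    return sorted(keep, key=score_alert, reverse=True)

alerts = [
    {"id": 1, "failed_logins": 25, "off_hours": True, "new_geo": False},
    {"id": 2, "failed_logins": 2, "off_hours": False, "new_geo": True},
]
for alert in triage(alerts):
    print(f"analyst review queue: alert {alert['id']}")  # only alert 1
```

The analyst still makes the call on everything in the queue; the model just decides what's worth their time.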

That one-two punch is really where A.I. thrives. We are not yet at the point where A.I. systems can do their own thing, on their own, with consistent results. And where they can, as we saw last week in my CBC column on Google Duplex, it's extremely narrow and very, very focused.

Google Duplex can have an audio conversation to book a restaurant reservation, ask about holiday hours, or book a hair appointment. That's it; it's not a generic solution. And I think that gives you a little more context on where we're at with A.I. We're seeing some extremely exciting stuff, but we're not at the point where the Terminator is going to come back from the future to off John Connor and his mother.

These systems are not acting independently yet. We have a long, long way to go in a number of research fields before we get there. But where we are seeing these techniques successfully applied, like at Stitch Fix, that's a huge boost, a competitive edge.

And that's a really good application. Another interesting one: I saw a talk years ago from Professor Eberhart, who spearheaded a technique called particle swarm optimization under the A.I. umbrella. He was talking about a client, FedEx, and how they organize loading planes.

That was a great problem for the computer, and it came up with solutions the humans had never thought of, because it's a constrained space: a parked plane with a certain capacity, a bunch of packages in the warehouse, X number of forklifts that can move packages back and forth, and the best routing and timing for all of it.
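
For a flavor of the technique itself, here's a minimal particle swarm optimizer in the Kennedy-Eberhart style: each particle is nudged toward its own best-known position and the swarm's best. The objective below is a toy quadratic of my own choosing; the real loading problem's formulation is far more complex and wasn't spelled out in the talk.

```python
import random

def pso(objective, dim=2, swarm_size=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Minimal particle swarm optimization: velocities blend inertia,
    a pull toward each particle's personal best, and a pull toward
    the swarm-wide best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(7)
# Toy stand-in for a loading cost: squared distance from a target point.
target = [3.0, -2.0]
best, best_val = pso(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)))
print(best, best_val)  # should land very close to the target
```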

A great problem for A.I.: the human just looks at the result, runs it through a sniff test, and off you go. So we're not at the point where A.I. is going to take over; it's not replacing jobs right now.

A.I. is an augmentation technology, and I think that's a wonderful place for it to be. We can see some significant advantages, and we have a long way to go when it comes to solving those kinds of problems. So that's the topic for today.

What do you think? How are you experimenting with machine learning or other branches of A.I.? Hit me up online at marknca, in the comments down below, or by email as always. Hope you're set up for a fantastic day.

I look forward to talking to you online, and we'll see you on the show tomorrow.
