Road to re:Invent - Amazon S3

Amazon S3 was one of the first AWS services and it continues to be the cornerstone of the AWS Cloud today. This stream covers the basics of the service and how to ensure that your data is only accessed in the ways that you expect.

Bad Robot Transcript

Good morning, everybody. How are you doing today? I know it's Monday; it's always tricky to get motivated to do something on a Monday. You're dealing with the massive amount of stuff in your inbox and you're trying to get a handle on your week. And I am at the AWS Summit in Toronto on Thursday, which is why I moved things around rather than doing the usual Tuesday/Thursday stream.

I moved it over here to do Monday/Wednesday, and maybe something on Friday. I appreciate you jumping in, and as I've already commented to Richard in the comments, these are archived. So if you can't catch the whole stream live here, you can watch it on LinkedIn.

It's posted to my website and YouTube as well, and I add more context for the final post. Actually, I'm looking at the link I put in for my site, and I put in the dev link, not the actual link; my site is not at localhost. That's a nice little hack of markn.ca. That being said, maybe there's a cross-site reference vulnerability there with that localhost link. The 2019 AWS re:Invent content, the latest content, is there; let me actually just remove that comment. So there you can get it, down at the bottom. S3 today: this is one of the oldest of the AWS services, the one that kind of kicked it all off. S3 and EC2 were the first ones that were launched, a little over thirteen years ago now. Crazy, absolutely crazy.

It is the service most people think they understand, yet it's where a lot of the problems come up. But there's a huge amount of power here in Amazon S3, and we're going to talk about that. We're going to go to around 10:30 today, so about 26 minutes, 25 and change, remaining. And as always, we are streaming live to LinkedIn; if you have comments, fire them up on the stream here.

I am monitoring it. If you're watching this on the replay later on, leave a comment down below on the YouTube video; I will read and reply to all of those, and that helps me shape content for future streams.

The next stream is going to be on Wednesday, and we're going to talk about two services, Amazon Athena and Amazon QuickSight, which actually link to S3; really powerful, really cool stuff. But today is S3, so let's flip over to the screen here. There we go; let me get that browser window open and, as always, resize it to make it a little easier for y'all to see.

Okay, so I'm going to move myself over in the stream so you don't have to worry about me, get my timer over here, and make this a little bit smaller. I think... cool. Alright, there we go. Amazon S3: the S3 stands, very simply, for Simple Storage Service; everyone just calls it S3.

Everybody knows what it is; it is a storage system. Basically, it's a big directory up in the sky; very simply put, very easy to understand. Actually, if we flip over to my website for a second: if we're looking at the shared responsibility model, which we did a whole stream on last week, S3 fits very firmly in the SaaS or abstracted services level.

Which means we are responsible for the data that we put into it and for configuring the service, and that's what we're going to touch on today. You may think this is really basic stuff, and a lot of it is, but that's the whole point of this stream: as we build up to re:Invent, cover some of the basics, cover some of the complicated stuff. And we're going to open your eyes, I think, to a few things in S3, and maybe you'll understand some of the challenges around it, because S3 has unfortunately gotten famous for a number of data breaches that happened to people storing data in S3.

That's not S3's fault; that's just people failing to understand this model and failing to understand how S3 actually works. Misconfiguration is the number one cloud threat, number one by far, because we have an insane amount of power, right? Look at all of the services available to us.

So this is the console; you can see all of these services. Now, how are you, as an individual builder, supposed to understand all of the ins and outs of all these services, right? It's really, really difficult because there's so much power at our fingertips; that's part of the reason why we're doing this stream.

We're going to talk about that as well. Let's dive right into this: I'm going to show you a high-level picture real quick. There we go. So there are three things we need to worry about; we're going to deal with two right off the bat, but I wanted to introduce this concept as we move forward. And again, if you have questions, fire them off here on the LinkedIn stream and we will address them as we go. So, you can see on the screen

we've got a slide up, very, very simple. In Amazon S3, we're going to worry about three things today. The first is literally an object; in common parlance, this is a file, and the name of that file is called a key. We also have a bucket; the bucket is where you put everything in S3. The first thing you do is create a bucket, and then you just start piling objects into it.

If you want, you can use a bucket policy, which defines who can shove things into that bucket and who can take things out of it. But right now we're going to start with just the basics: we have objects and we have a bucket. So let's flip back over to our browser, and we are going to actually create a new bucket.

We did this really quickly in our first stream, on Lambda; we're going to redo it now. So if I click "Create bucket", we get the wizard to help us through this, and the first thing we need to do is give it a bucket name.

This is the name of the bucket; it's how we're going to refer to it, in code or in common parlance. Then we need to pick the region. Now, you'll notice almost all the regions are available; Hong Kong is disabled because it's an opt-in region, same with Bahrain.

I haven't turned it on yet. But I can pick; in this case, I'm going to pick us-east-1, the Northern Virginia region. There's a concept of regions, right: us-east-1, us-west-1, us-west-2, Canada. These are regions, and in the regions there are Availability Zones, and we do not need to pick an Availability Zone for S3,

because S3 spans all of the Availability Zones in a given region. That's a really nice thing for durability; we're going to talk about that in a few seconds. So here I can copy settings from an existing bucket; I'm not going to do that, because I want to walk you through how these settings are set. But first we name it: road-to-reinvent, the name of our bucket.

We're going to leave it in us-east-1. Do we want to keep all versions of all the objects in the same bucket? No, but you can: S3 has a versioning system, which means if you put sample.txt in and then you update it and upload sample.txt again, you can save successive versions, which is really handy for rolling back.

So think iCloud Drive, Dropbox, Box.net; they all have versioning built in, and it's extremely handy depending on the kind of data you're dealing with. For us, we're not going to turn it on. Next, server access logging: you can log all the requests to your bucket. This logs all the puts and gets and everything, all the everyday activity on your bucket; that can be handy,

especially if you're running a website out of your bucket. In our case, we don't want it. Tags are a common feature in AWS; maybe we'll tackle them in a future stream, but tags basically let you put identifiers, keys and values, on resources so that you can make sense of the resources you own. So in a big company, in a shared account, you could say a key is "team" and then the value is, you know, "Mark", right? So we know Mark's team has this bucket. In our case,

we're not going to worry about it. And then you have finer-grained, object-level logging. So the S3 server access logging up top is more chunky; down here you've got some really granular logging. It fills up your logs really, really quickly if you have an active bucket, but you've got that option. And then you can also check the box for automatically encrypting your data when it's stored in S3. Excuse me. Now, this here is a server-side key, which means you do not manage the key, and that's still solid; it still adds an additional layer of protection if you want it, and it's really as easy as checking that box. Now we click Next to get to setting our permissions. We're going to spend quite a bit of time on permissions here; if you have questions on permissions, let me know in the chat here on LinkedIn, or after the fact in the YouTube comments down below.
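As a sketch of what that encryption checkbox translates to, this is roughly the configuration shape the S3 PutBucketEncryption API takes, shown here as a plain Python dict rather than a live SDK call, so no AWS account is touched:

```python
import json

# Default-encryption settings for a bucket, in the shape the S3
# PutBucketEncryption API expects. "AES256" is SSE-S3: S3 manages the
# key for you, which matches the checkbox in the console walkthrough.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"  # server-side key, managed by S3
            }
        }
    ]
}

print(json.dumps(encryption_config, indent=2))
```

With boto3 you would pass this dict as the `ServerSideEncryptionConfiguration` argument to `put_bucket_encryption`; the dict itself is the part worth remembering.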

This screen is one of my all-time favorites, and the reason being is that S3 is commonly misconfigured and people end up leaking a bunch of data out of it. Chris Vickery at UpGuard, a fantastic researcher; Chris has made his lunch on S3 bucket misconfigurations for the last few years.

In fact, I've actually written a few articles over on the A Cloud Guru blog around S3 security, simply because people seem to have a challenge with this, right? And very, very simply: S3 buckets start locked down; your explicit choices open them up. Okay. So I've written this article, which I'll actually put the link to in the comments right now: an S3 security post.

And then there's actually another S3 security post that I've written that's a little bit more detailed; I'm typing that second link in here for you all in the LinkedIn comments, so you've got both if you want to see them. This one, I think, summarizes it nicely: people just put everything in a bucket and make it public.

How do you not make them public? Let's flip back to the console. You can see here, this is a relatively new screen; it's been in place for about 18 months, and it's one easy click: block all public access. Block all public access. Here's my number one tip, my top tip.

I guess I'll give lots of tips, but this is my top tip when it comes to S3 buckets: if you're going to have public S3 buckets, try to keep them in a completely separate account. You do not pay for an AWS account; you pay for AWS account activity. So setting up a whole separate account for public buckets is not going to cost you anything.

Now, you can create complicated permissions that allow you to write from one account into the other but not read, which is even better. But you can have that sort of separation, that firewall, of saying, you know, here's the line: everything on this side of the line is private, everything in this account is public. And that's a really easy way to make sure you don't shoot yourself in the foot.
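Back to that "block all public access" switch for a second: under the hood it's four booleans, and this is roughly the shape the S3 PutPublicAccessBlock API expects, sketched as a plain Python dict rather than a live SDK call:

```python
# The four "block public access" switches from the console, in the shape
# the S3 PutPublicAccessBlock API expects. Leaving all four True locks
# the bucket down: nothing in it can be made public.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # restrict access even if a public policy exists
}

print(public_access_block)
```

With boto3 this dict goes in as the `PublicAccessBlockConfiguration` argument to `put_public_access_block`.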

Here's another way: when you're setting up your bucket, block all public access. Until this is unchecked, through a specific API call or by coming into the console to remove it, nothing in this bucket can be public. That's great; that's what we want. We're going to override everything else with an explicit block. Now, underneath here,

there's a ton of options, right: block public access to buckets and objects granted through new access control lists, so keep existing ones but don't let any new ACLs do it; and block public access granted through new public bucket policies. Nine times out of ten,

you just want to leave this checked to block all public access. We're going to do that, because we're not making a website; we're going to block public access. Then we click Next. Now, we could go even finer, but we'll go to review. We're going to see: okay, road-to-reinvent, us-east-1, versioning disabled, logging disabled, no tags.

We've got server-side encryption on and we're blocking all public access. Click "Create bucket" and, you see, real quick we've created a bucket. Okay. So now we have our bucket in place. If we click there, we come into the bucket; you'll see we have nothing in it yet. We have yet to put an object into the bucket. We can do that by clicking on the upload button, which brings up the upload dialog.

In this case, we're going to add a file: we go to the desktop and we add this file, which is just a simple image file. It gives us a summary. We can add a bunch of files here to do a group upload. You can't go bigger than 160 GB in one upload through the web browser.

You have to use another tool to do that, and not many people are going to hit that limit. But, you know, a lot of the time you're not using the web interface to access S3 anyway. So we click Next, and here we go to setting permissions. Again, we've already set our bucket permissions saying don't let anything be public; now we can have individual permissions on our object. So we can say, in this case,

I'm logged in with the AWS account, the demo account, and it has read permissions as well as read/write permissions on this specific object, so it can read all objects and I can read and write those. I could add more users here; I could add another account.

You could say: okay, Joe's allowed to access this, Francine's allowed to access this. You could create quite a nuanced set of permissions for this specific file. Now, the way that works: we already have two layers in place. We've got the bucket policy up top, which is a bigger filter, and then the object policy.

You've got to pass through the bucket policy to ever hit the object. So if you're blocked at the bucket and allowed at the object, that's going to conflict, and denies always beat allows. You want to make sure you've got that filtering straight. Then we click Next and we pick our tier. Now, for tiers...

Actually, just to really clarify, because that was going through my head: when you have the bucket policy and the object policy, in our bucket policy we've said no public access. So that's going to deny any unauthenticated user; anybody who hasn't logged in won't be able to access it. And then down at the object level is where we say Francine and Joe and Mark can access this object.

So an authenticated user, say Fred, gets through that bucket policy because he's not a public user; he's authenticated, we know who he is. Then he hits the object policy and doesn't have permission, so he's not allowed. Now, if we said nobody but Fred is allowed to come through this bucket policy, explicitly denying everybody else, that would stop everybody else.
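The Fred example above can be sketched in a few lines of code. To be clear, this is a toy model of the idea (bucket-level filter first, explicit deny wins, default is locked down), not the real S3/IAM evaluation algorithm:

```python
# Toy model: a request passes the bucket-level filter first; an explicit
# deny anywhere wins; otherwise access requires an allow on the object.
def is_allowed(user, bucket_policy, object_policy):
    if bucket_policy.get(user) == "deny":
        return False  # explicit deny always wins
    if object_policy.get(user) == "allow":
        return True   # got past the bucket, allowed on the object
    return False      # default: locked down

bucket = {"public": "deny"}  # no anonymous access at the bucket level
obj = {"francine": "allow", "joe": "allow", "mark": "allow"}

print(is_allowed("mark", bucket, obj))    # True: authenticated and granted
print(is_allowed("fred", bucket, obj))    # False: authenticated, no object grant
print(is_allowed("public", bucket, obj))  # False: blocked at the bucket
```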

Right? So there's a little bit of nuance there, but you get the point: bucket policy first and then the object policy. But now we need to pick our storage class. This has a direct impact on your bill; the standard tier is the most expensive, at least to start.

But the idea here is basically: how often are you using this stuff, and how much durability do you need? So standard, that's normal, standard S3. It goes across three Availability Zones; there is a huge amount of redundancy. I'm just going to flip over to the actual detail here.

So if we look at general-purpose S3 Standard: eleven nines of durability, which means it's highly unlikely that your object will be changed or have any bits flipped. It's resilient against an entire AZ going down, because you're across multiple Availability Zones. Availability, which means your ability to get your data back, is four nines, which is pretty solid.

Right, and that is the standard way that you get storage in S3, and you can reduce from there. So One Zone-IA is only going to give you one zone instead of three, and you're going to keep the object for at least 30 days. Glacier goes to cold storage; Glacier Deep Archive is even colder storage. And Intelligent-Tiering is a newer class of storage.

It is an intelligent model that basically says: I'm going to start in standard, and then if you don't use this object, like if nobody is accessing it, I'm going to move it down to infrequent access; and if nobody's using it then I'm going to push it to Glacier; and if still nobody's using it, we can push it to Deep Archive. Because as you go down this chart, it's less money to store.

So Glacier and Glacier Deep Archive are really inexpensive to store in; they cost a ton of money to pull data out of, though. So the idea is for long-term stuff that you have to legally keep but probably don't want to access daily; Glacier and Deep Archive are great for stuff you don't really need day to day. So the naming makes a lot of sense, and most of the time you're going to just sit in standard; the costs are not that big.
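The standard, infrequent access, Glacier, Deep Archive walk just described is exactly what a lifecycle rule automates. Here is a sketch of the configuration shape the S3 lifecycle API expects; the day counts are illustrative assumptions, not a recommendation:

```python
# A lifecycle rule that walks objects down the storage classes:
# Standard -> Standard-IA after 30 days -> Glacier after 90 ->
# Deep Archive after 365. This is the shape the S3
# PutBucketLifecycleConfiguration API expects.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

print(lifecycle_config["Rules"][0]["Transitions"])
```

Intelligent-Tiering does roughly this for you automatically, based on observed access, instead of fixed day counts.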

You know, it's fractions of pennies and things like that. So we're going to keep it in standard, we click Next, and then we go to our review. That's good. And we upload it, and all of a sudden we have a file in S3. It's called an object.

You'll notice that it is sitting, right now, in just the root of that bucket. The really interesting thing is that there is no directory structure in S3, even though there's literally a button labelled "Create folder". The reason being is that this object has a name, and that name is called the key.

So if you look at this, the key for our file right now is the filename, right: acg. It's an ad for the course I did on mastering the Well-Architected Framework; at least, this is a screenshot of it. And you'll see we have a URL. If I hit this URL right now:

Access denied. Why is the access denied? Because I'm not authenticated on this URL; I'm authenticated in the console, not in the URL, and we blocked all public access. But if we hadn't blocked all public access, you could actually give out URLs for your S3 objects.

So in this case, the key here, if we come back to our key point, pun intended: there are no file folders. But if I create a file folder, let's call it "test", and save it, now I'm going to upload the exact same file. I'm just going to breeze through this because we've already talked about it.

I'm going through the same dialog, keeping all the default settings, blah blah blah, and this is now uploaded. Now, if I click on this file, you're going to see the key's changed. The filename's still the same; I literally uploaded the same file and didn't change anything, but the key is now test/filename. And that's because file folders are a lie.

File folders are a lie in S3; they are the de facto way clients visualize files, to make things easier to understand. So if we come back to the root of our bucket, you're going to see we still have a folder, and we have the file in the root and a file in the folder. The reason to set it up in the keys this way is that any time keys start with the same prefix and a slash, it just makes more sense for us to manage them. But it's an interesting little bit of history trivia.
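The "folders are a lie" point is easy to show in code: S3 stores a flat list of keys, and clients fake the folder view by grouping on a "/" delimiter, much like listing with a delimiter does. A small self-contained sketch, with made-up key names:

```python
# S3 keys are flat strings; "folders" are just shared prefixes that a
# client groups together when displaying a listing.
keys = ["acg.png", "test/acg.png", "test/notes.txt"]

def list_top_level(keys, prefix=""):
    """Mimic a folder-style listing over a flat key space."""
    files, folders = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            # everything up to the next "/" pretends to be a folder
            folders.add(prefix + rest.split("/", 1)[0] + "/")
        else:
            files.append(key)
    return files, sorted(folders)

print(list_top_level(keys))            # (['acg.png'], ['test/'])
print(list_top_level(keys, "test/"))   # (['test/acg.png', 'test/notes.txt'], [])
```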

There are no file folders; it's all a giant, giant lie. So if we click back to our key, you'll see that we've got the high-level information: when it was modified, the storage class it's currently sitting in, which is standard, its size, its name. We can also adjust some of the properties here, so we can change the storage class and we can add metadata to it.

So if you want to do key-value pairs and add additional information to these files, that's handy for automation. We can also lock this object so that nobody can delete it, and we can add more tags to it as well. And of course we can go to permissions; this will allow us to add complicated permissions.

You can see here, right now, the access for the object owner, which would be me, and we could add public access, even though it says we can't right now because we've been blocked, which is great. So anything we add under here doesn't apply, because we blocked it. But we can add other accounts, we can add other users, or we can pull our objects out of S3. And that's, really simply put, how S3 works: very easy. An object, which has a key, which is its name, goes into a bucket.

I'm going to flip back to the slide for a moment because I want to talk in depth about permissions. So we know we have an object, we know we have a bucket, and we know we have a bucket policy; we've seen a little bit about the bucket policy. Again, if you have questions, just fire them off in the LinkedIn comments.

I am monitoring those actively, and I expect some questions on this part because it gets kind of weird. So we saw a bit of the bucket policy already, which is: when you have a new bucket you've created, you can say no public, don't let anything in this bucket be public. And while we saw that in a nice little dialog, what was actually happening behind the scenes was that the console created a bucket policy for us, and that bucket policy said no public. You can edit that bucket policy and say public with the following exceptions.

You could remove the public blocker altogether. You can define how this bucket is used; the idea of a bucket policy is that you're taking a set of permissions and applying it to the bucket itself. That can say "check to make sure I know who's accessing me"; that can say "everybody's allowed to look at this image file, but not this image file".
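As an example of that kind of statement, here is what a minimal bucket policy document looks like: everyone may read one specific image, and nothing else is opened up. The bucket and key names are made up for illustration:

```python
# A minimal bucket policy document: anonymous ("*") principals may GET
# exactly one object; every other action stays locked down.
# Bucket and key names here are hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadOneObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/images/allowed.png",
        }
    ],
}

print(bucket_policy["Statement"][0]["Resource"])
```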

There's a rather complex set of things that you can do. Makes sense. Now, this was the original way to do policies and permissions in S3, because the newer way didn't exist yet. So bucket policies apply to the bucket, and then anything that goes in the bucket can have additional object policies on top of it.

The other, primary way to apply permissions in S3 is using AWS IAM, the Identity and Access Management tool. That allows you to assign permissions to users or to roles. So I am a user; Mark is a user. A role gets assigned to a thing: to an EC2 server ("instance" is the parlance), or to a container somewhere, or to a Lambda function. Roles get assigned to things.

Okay. So you can think of the bucket policy as really a role assigned to that bucket, but it's managed on its own, and that's the problem; that's where people get messed up. Really, a bucket policy is just a permanent IAM role assigned to that bucket, but it's managed in the S3 area of AWS; all your other permissions are managed

in IAM, yes. This is where you can say: you know what, Mark's allowed to access this bucket, but only objects that fall under the test folder, right? Anything with the prefix test/, Mark's allowed to access. You can do that fine-grained stuff, assigned to the person or the role, in IAM, and that's really where you should be assigning the vast majority of your permissions. Stick to one place.
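That "Mark can only touch keys under test/" idea looks roughly like this as an IAM policy document. The bucket name and ARNs are made up for illustration; a policy like this would be attached to the user or role in IAM, not to the bucket:

```python
# An IAM policy document restricting a user or role to the test/ prefix
# of one bucket. Bucket name and ARNs are hypothetical.
prefix_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTestPrefixObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::road-to-reinvent/test/*",
        },
        {
            "Sid": "AllowListingTestPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::road-to-reinvent",
            # listing is a bucket-level action, so the prefix restriction
            # goes in a condition rather than the resource ARN
            "Condition": {"StringLike": {"s3:prefix": "test/*"}},
        },
    ],
}

print([s["Sid"] for s in prefix_policy["Statement"]])
```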

Don't do both; both ends up being confusing. Do one. Now, it's Monday, and I want to melt your brain a bit, because why not, brains are for melting; that's what we're going to go with for Monday morning. There are two additional ways to actually apply permissions in AWS. I'm coming back to the "how to secure an S3 bucket" article from a while ago.

So we've talked about IAM policies and we've talked about bucket policies. There's also a thing called an access control list that can get applied, and query-string authentication, or URL-based access: you can give somebody a URL and say you can POST to this URL, or you can GET from this URL, and it goes directly into S3.

There are scenarios where you want to use that. However, I strongly, and I mean strongly, strongly, recommend that you keep your access all in IAM. Just do it; it's easier, simpler. That's what IAM is there for: keeping all of your permissions in one place. It gives you a lot of great tools, a lot of great ways to audit and check your permissions. Try to keep them in

IAM as much as possible. If you have to go to the bucket, because, remember, you can think of the bucket policy as an IAM role assigned to the bucket permanently, that's fine; that works too. Try to avoid access control lists, and try to avoid query-string and URL-based stuff, because you're just going to get lost. And if you have to, create a new account or new buckets, and try not to mix these various ways on the same bucket, because there is no simple way to view all of the permissions assigned to a bucket and everything in that bucket.

Okay, that's why, as we talked about at the start, misconfiguration is the number one problem: there are too many options. Now, those options, trust me, there are times when one of them will save your butt, and when you're in that scenario you really want that option. But do not mix and match; just because you can doesn't mean you should, a good rule for life in general. Try to separate these by bucket,

if not by account. And again, for that public stuff, it's supercritical to go by account, just to save your you-know-what. So that, in a nutshell, is S3. Now, I've got two minutes left, so I want to cover something really, really simple. You noticed my website here: this is markn.ca, and this is the page where I'm tracking all the stuff we're doing around re:Invent. You see here, "up next" is right now; that's what we're doing today. And then on Wednesday at 10 a.m. we're going to talk about Athena and QuickSight, and I've posted all the stuff we've talked about already.

This is actually being served out of an S3 bucket; S3 is a full web server, super cool. What we're going to do is flip back to S3 here, and we're going to create road-to-reinvent.com. We click Next, we leave all of the bucket options as is, and we then go to set permissions.

We're going to remove the blocking of public access and we're going to create this bucket. Now, the reason why we called this ".com" is because you can actually assign your bucket to line up with a domain name. Okay. Then we click on the properties of the bucket.

We're going to go to static website hosting, and we're going to say "use this bucket to host a website": index.html as our default, error.html for errors. There's a bunch of options here for setting up your website; we're simply going to click Save. Then we go back to the root of this bucket and upload a file, that exact same file that we've uploaded time and time again. When we get to the permissions, we'll leave those; we'll do them in a second. Standard storage, and upload.
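While that uploads: the static-website settings just saved correspond roughly to this configuration shape in the S3 website API. The index.html and error.html names are the conventional ones from the walkthrough, and are configurable:

```python
# Static website hosting settings, in the shape the S3 PutBucketWebsite
# API expects: the default document served for "/" requests, and the
# page returned on errors.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

print(website_config)
```

With boto3 this dict is the `WebsiteConfiguration` argument to `put_bucket_website`; remember the objects also have to be publicly readable before the site actually serves.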

Just looking at the time here; yeah, it's been uploaded. So if we go now and check, you'll see the URL's different. That's fine. But if we click on the URL, we still get access denied. Just because the bucket is set to host a website doesn't mean that's going to work.

So what we actually need to do is go to permissions and say everybody can read this object. Big warning: hey, this is public. Again, don't do this in the same account; create a second account to do it. We have a dummy demo account that we're looking at today. But now that we've set this to be public, if we just refresh, we should see that everyone has read access.

We come back to our URL, and there we go. Now this URL, s3.amazonaws.com, the name of our bucket, the key name, gives us access to this picture. Okay. You can set up an entire website there; that's in fact what I've done with my personal website at markn.ca. That's in its own separate account and hosted there so that I don't make the mistake of doing something like exposing my personal photo archive.

So S3 has the ability to serve websites directly. Of course, if you want to make them fast, you can put something like CloudFront in front of them. And with that in a nutshell, we're at our time block here, so let me switch back to me. That was S3: you've got buckets.

You've got objects, and you put an object in a bucket. You have multiple ways of setting permissions; stick to one of them if you can, either IAM permissions for everything or a bucket policy for everything. If you need to have both a private bucket and a public bucket, put them in different accounts. Accounts cost you nothing; usage costs you. So you're not going to pay any more to put it in a separate account.

You're going to pay the exact same thing, and it's a good strategy operationally, so that you don't shoot yourself in the foot by putting something sensitive into a bucket configured for public access. Because we've seen that time and time and time again over the last few years: people make a simple mistake, taking a bucket that starts locked down and unfortunately adding additional permissions; because of the different ways to add permissions, they get tripped up and expose customer records or sensitive data, and that is a bad, a very bad, thing.

And this has been a quick introduction to S3. We're going to build on it Wednesday by looking at Athena and looking at QuickSight. These are tools that help you access the data in S3 or visualize the data in S3; really cool, kind of hidden services, two of my favorites actually, that are often overlooked and shouldn't be. So let me know what you think in the comments below. Thanks for joining the stream; the next one, as you saw, is on Wednesday at 10 a.m. Eastern. Again, normally I don't do Mondays, Mondays are going too hard, but because I'm at the AWS Summit in Toronto on Thursday, it made sense to do this here.

So I hit me up. Let me know. Thanks for joining the stream really appreciate it. Have a great day, and we will talk to you soon. Good morning, everybody. How you doing today? I know it’s Monday always tricky to kind of get motivated to do something on a Monday you’re dealing with the massive amount of stuff in your inbox and you’re trying to get a handle on your week.

And I am at the AWS Summit in Toronto on Thursday, which is why I moved things rather than doing Tuesday Thursday stream. I moved it over to hear to do Monday Wednesday and made you something on Friday, but I appreciate you jumping in and as I’ve already commented out to Richard in the comments these are archived.

So if you can’t check catch the whole stream live here and you can watch it on LinkedIn. I posted to my website and YouTube as well. And so I definitely more I contact the final course. I’m looking at the link I put in for my sight and I put in the devil not the actual link.

My site is not at localhost. That’s a nice little hack Mark end. CA that being said maybe. Crossing reference vulnerability there with the Local Host 2019 AWS reinvent the latest content there to let me actually just even remove that comment. So there you can get it down at the bottom three today.

This is the oldest of eight of your services at the one that kind of kicked it off and S3 and and easy to wear the first ones that were launched a little over thirteen years ago now crazy absolutely crazy. It is the one that most people think they understand yet where a lot of the problems come but there’s a huge amount of power here in Amazon S3, and we’re going to talk about that.

So what we’re going to go to around at 10:30 today. So book 26 minutes 25 and change remaining and as always we are streaming live to LinkedIn if you have comments fire them up on the stream here. I am monitoring it up. Stop flow down pretty recently. If you’re watching this on the replay at later on leave a comment down below.

I’m on the YouTube video and I will read all those are applied all those and that helps me set up happy content for future. I’m streams. The next train is going to be on Wednesday and we’re going to talk about two Services Amazon, Athena and Amazon quicksight, which actually linked to S3 really powerful really cool stuff.

But today has three, so let’s flip over to on the screen here and There we go. Let me get that one browser window open always resize it to make it a little easier for y’all to see it. Okay, so I’m going to move myself over in the Stream so you don’t have to worry about me get my timer over here when I make this a little bit smaller.

I think... cool. Alright, there we go. Amazon S3: S3 stands very simply for Simple Storage Service, right? Everybody just calls it S3; everybody knows what it is. It is a storage system. Basically, it's a big directory up in the sky. Very simply put, very easy to understand.

Actually, if we flip over to my website just for a second: if we look at the shared responsibility model, which we did a whole stream on last week, S3 fits very firmly in the SaaS or abstracted level of service. That means we are responsible for the data we put into it and for configuring the service, and that's what we're going to touch on today.

You may think this is really basic stuff, and a lot of it is, but that's the whole point of this stream as we build up to re:Invent: cover some of the basics, cover some of the complicated stuff. We're going to open your eyes, I think, to a few things in S3, and maybe you'll understand some of the challenges around it, because S3 has unfortunately gotten famous through a number of data breaches that happened to people storing data in S3.

That's not S3's fault; that's just people failing to understand this model and failing to understand how S3 actually works. Misconfiguration is the number one cloud threat, the number one cloud threat by far, because we have an insane amount of power if we look at all of the services available to us.

So this is the console, and you can see all of these services. Now, how are you, as an individual builder, supposed to understand all of the ins and outs of all these services, right? It's really, really difficult because there's so much power at our fingertips. That's part of the reason why we're doing this stream.

We're going to talk about that. Let's dive right into this. I am going to show you a high-level picture real quick. There we go. So there are three things we need to worry about; we're going to deal with two right off the bat, but I wanted to introduce this concept as we move forward. Again, if you have questions, fire them off here on the LinkedIn stream and we will address them as we go. So you can see on the screen.

We've got a slide up, very, very simple. In Amazon S3 we're going to worry about three things today. The first is literally an object; in common parlance, this is a file, and the name of that file is called a key. We also have a bucket. The bucket is where you put everything in S3: the first thing you do is create a bucket, and then you just start piling objects into it.

If you want, you can use a bucket policy, which defines who can shove things into that bucket and who can take things out of it. But right now we're going to start with just the basics: we have objects and we have a bucket. So let's flip back over to our browser, and we are going to actually create a new bucket now.

We did this really, really quickly in our first stream on Lambda; we're going to redo it now. So if I click to create a new bucket, we get the wizard to help us through this. The first thing we need to do is give it a bucket name.

The name of the bucket is the way we're going to refer to it, in code or in common parlance. Then we need to pick the region. You'll notice almost all the regions are available; Hong Kong is disabled because I haven't opted into it, same with the other newer regions.

I haven't turned it on yet, but I can pick here. In this case I'm going to pick us-east-1, the Northern Virginia region. There's a concept of regions, right: us-east-1, us-west-1, us-west-2, Canada. Within the regions there are availability zones, and we do not need to pick an availability zone for S3.

That's because S3 spans all of the availability zones in a given region, which is a really nice thing for durability; we're going to talk about that in a few seconds. Here I can copy settings from an existing bucket. I'm not going to do that, because I want to walk you through how these settings are set. First we name it road-to-reinvent; that's the name of our bucket.

We're going to leave it in us-east-1. Do we want to keep all versions of all the objects in the same bucket? You can: S3 has a versioning system, which means if you put sample.txt in and then you update and upload sample.txt again, you can save successive versions, which is really handy for rolling back.

Think of iCloud Drive, Dropbox, or Box; they all have versioning built in. It's extremely handy depending on the kind of data you're dealing with, but for us, we're not going to turn it on. Next is server access logging, which logs all the requests to your bucket: all the gets and the puts, all the everyday activity on your bucket. That can be handy,

especially if you're running a website out of your bucket. In our case, we don't want it. Next are tags, a common feature in AWS; maybe we'll tackle that in a future stream. Tags basically let you put identifiers, keys and values, on resources to make sense of them in your account. So in a big company, in a shared account, you could say the key is "team" and the value is "Mark", so we know Mark's team owns this bucket. In our case,

we're not going to worry about it. Then you have finer-grained object-level logging. The S3 server access logging up top is fairly coarse; down here you've got some really granular logging, which fills up your logs really, really quickly if you have an active bucket, but you've got that option. And then there's a checkbox for automatically encrypting your data when it's stored in S3. Now, this is a server-side key, which means you do not manage the key. It's still solid; it still adds an additional layer of protection if you want it, and it's really as easy as checking that box. Now we click next to set our permissions. We're going to spend quite a bit of time on permissions here, so if you have questions on permissions, let me know in the chat here on LinkedIn or, after the fact, in the YouTube comments down below.
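Quick aside on that encryption checkbox: behind it, the console is setting a default encryption configuration on the bucket. It looks roughly like this (a sketch of the shape the PutBucketEncryption API takes; SSE-S3 with AES-256 is what the checkbox enables):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```

With this in place, every object written to the bucket gets encrypted at rest without the uploader doing anything.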

Very simply: this screen is one of my all-time favorites, the reason being that S3 is commonly misconfigured and people end up leaking a bunch of data out of it. Chris Vickery at UpGuard, fantastic researcher, has made his name on S3 bucket misconfigurations over the last few years.

In fact, I've written a few articles over on the A Cloud Guru blog around S3 security, simply because people seem to have a challenge with this, right? And very, very simply: S3 buckets start locked down; your explicit choices open them up. Okay. So I've written this article, which I'll actually put the link to in the comments right now: an S3 security post.

And then there's another S3 security post that I've written that's a little bit more detailed. I'm typing that second S3 security post in here for you all in the LinkedIn comments, so you've got those links if you want to see them. This one, I think, summarizes it nicely: people just put everything in a bucket and make it public.

So how do you not make them public? Let's flip back to the console. You can see here (this is a relatively new screen; it's been in place for about 18 months) one easy, easy click: block all public access. Block all public access. Here's my number one tip, my top tip.

I guess I'll give lots of top tips, but my top tip when it comes to S3 buckets: if you're going to have public S3 buckets, try to keep them in a completely separate account. You do not pay for an AWS account; you pay for AWS account activity. So setting up a whole separate account for public buckets is not going to cost you anything.

Now, you can create permissions that allow you to write from one account into the other but not read, which is even better. But you can have that sort of separation, that firewall: here's the line; everything on this side of the line is private, everything in this account is public. That's a really easy way to make sure you don't shoot yourself in the foot.

Here's another way: when you're setting up your bucket, block all public access. Until this is unchecked, through a specific API call or by coming into the console to remove it, nothing in this bucket can be public. That's great; that's what we want. We're going to override everything else with an explicit block.
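Under the hood, that one checkbox sets four flags on the bucket, its public access block configuration. Roughly this shape (what the PutPublicAccessBlock API accepts):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

Leaving all four true is the safe default; unchecking individual flags is how the finer-grained options map onto the API.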

Now, underneath here there's a ton of options: block public access granted through new access control lists (so keep the existing ones, but don't add any new), block it through any access control lists at all, and block public access granted through new or any public bucket policies. Nine times out of ten, you just want to leave this checked to block all public access. We're going to do that because we're not making a website; we're going to block public access. We click next, and while we could go even finer, we'll go to review. Now we see: okay, road-to-reinvent, us-east-1, versioning disabled, logging disabled, no tags.

We've got server-side encryption on and we're blocking all public access. Click create, and you see, real quick, we've created a bucket. So now we have our bucket in place. If we click on it, we come into the bucket, and you'll see we have nothing in it; we have yet to put an object into the bucket. We can do that by clicking the upload button, which brings up the upload dialog. In this case,

we're going to add a file. We go to the desktop and add this file, which is just a simple image file, and it gives us a summary. We could add a bunch of files here to do a group upload. You can't go bigger than 160 GB in one upload through the web browser;

you have to use another tool to do that, and not many people are going to hit that limit. But a lot of the time you're not using the web interface to access S3 anyway. So we click next, and here we go to setting permissions. Again, we've already set our bucket permissions to say don't let anything be public. Now we can have individual permissions on our object, so we can say, in this case,

I'm logged in with an AWS account (the demo account), and it has read permissions as well as read/write permissions on this specific object. I could add more users here; I could add another account.

You could say: okay, Joe's allowed to access this, Francine's allowed to access this. You could create quite a nuanced set of permissions for this specific file. Now, the way that works: we already have two layers in place. We've got the bucket policy up top as a bigger filter, and then the object policy.

You've got to pass through the bucket policy to ever hit the object. So if you're blocked at the bucket and allowed at the object, that's going to conflict, and denies always beat allows. You want to make sure you've got that sort of filtering straight. Next we pick our tier. Actually, just to really clarify first, because that was going through my head: with the bucket policy and the object policy, our bucket policy says no public access, so it's going to deny any unauthenticated user. Anybody who hasn't logged in won't be able to access the object. Then down at the object level we say Francine and Joe and Mark can access this object.

So an authenticated user, say Fred, gets through that bucket policy because he's not a public user; he's authenticated, we know who he is. Then he hits the object policy, doesn't have permission, and so he's not allowed. Now, if we said nobody but Fred is allowed to come through this bucket policy, explicitly denying everybody else, that would stop everybody else.
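That "nobody but Fred" bucket policy would look something like this. This is a sketch: the account ID and user name are placeholders, and in a real policy you'd typically also exempt the account root so you can't lock yourself out:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEveryoneButFred",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::111122223333:user/Fred"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::road-to-reinvent",
        "arn:aws:s3:::road-to-reinvent/*"
      ]
    }
  ]
}
```

Because the explicit deny beats any allow, no object-level permission can override it.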

Right? So there's a little bit of nuance there, but you get the point: bucket policy first, and then the object policy. Now we need to pick our storage class, and this has a direct impact on your bill. The standard tier is the most expensive (intelligent-tiering may be a little bit more expensive now to start), but the idea here is basically: how often are you using this stuff, and how much durability do you need? Standard, which is normal standard S3, goes across three availability zones, so there is a huge amount of redundancy. I'm just going to flip over to the actual detail here.

So if we look at general-purpose S3 standard: eleven nines of durability, which means it's highly unlikely that your object will be changed or any bits flipped, and it's resilient against an entire AZ going down because you're across multiple availability zones. Availability, meaning your ability to get the object back, is four nines, 99.99%, which is pretty solid.
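To put eleven nines in perspective, here's a quick back-of-the-envelope calculation. This is my own sketch, not an AWS formula; it simply treats the durability figure as an independent annual loss probability per object:

```python
# S3 Standard advertises 99.999999999% (eleven nines) durability.
durability = 0.99999999999
annual_loss_rate = 1 - durability  # chance a given object is lost in a year

# With ten million objects stored, expected losses per year:
objects = 10_000_000
expected_losses_per_year = objects * annual_loss_rate

# That works out to roughly one object lost every ten thousand years.
print(round(expected_losses_per_year, 6))   # 0.0001
print(round(1 / expected_losses_per_year))  # 10000
```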

Right, and that is the standard way that you get storage in S3, and you can reduce from there. One Zone-IA is going to keep your data in only one zone instead of three, and you keep it there for at least 30 days. Glacier goes to cold storage; Glacier Deep Archive is even colder storage. Intelligent-tiering is a newer level of storage.

It's an intelligent model that basically says: I'm going to start in standard, and then if nobody is accessing this object, I'm going to move it down to infrequent access; if nobody's using it there, I'm going to push it to Glacier; and if still nobody's using it, we can push it to Deep Archive, because as you go down this chart, it's less money to store.

So Glacier and Glacier Deep Archive are really inexpensive to store in; they cost a ton of money to pull out of, though. The idea is long-term stuff that you have to legally keep but probably don't want to access daily; Glacier and Deep Archive are great for that. Most of the time you're going to just sit in standard, and the costs are not that big.
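Intelligent-tiering makes those moves for you based on access patterns. If you'd rather schedule the movement yourself, that's a lifecycle configuration on the bucket. Here's a sketch (the rule ID and the transition days are example values, not recommendations):

```json
{
  "Rules": [
    {
      "ID": "archive-old-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```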

You know, it's fractions of pennies and things like that. So we're going to keep it in standard, click next, and then we go to our review. That's good, and we upload it. All of a sudden we have a file in S3; it's called an object.

You'll notice that it is sitting right now in just the root of that bucket. The really interesting thing is that there is no directory structure in S3, even though there's literally a button called create folder. The reason is that this object has a name, and that name is called the key.

So if you look at this, the key for our file right now is the filename, right? It's an ACG ad for the course I did on mastering the Well-Architected Framework, or at least a screenshot of it. And you'll see we have a URL. If I hit this URL right now:

Access denied. Why is the access denied? Because I'm not authenticated on this URL; I'm authenticated in the console, not in the URL, and we blocked all public access. But if we hadn't blocked all public access, you could actually give out URLs for your S3 objects.

So in this case, coming back to our key point (pun intended), there are no file folders. But if I create a file folder, let's call it test, and save it, now I'm going to upload the exact same file. I'm just going to breeze through this because we've already talked about it.

I'm going through the same dialog, keeping all the default settings, blah blah blah, and this is now uploaded. Now if I click on this file, you're going to see the key has changed. The filename is still the same; I literally uploaded the same file and didn't change anything. But the key is now test/filename, and that's because file folders are a lie. File folders are a lie in S3; they're the de facto way clients visualize files to make them easier to understand. So if we come back to the root of our bucket, you're going to see we still have a folder, and then we have the file in the root and a file in the folder. The reason to set keys up this way is that any time keys start with the same prefix and a slash, it just makes more sense for us to manage them. But it's an interesting little bit of history trivia.
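You can see the folder illusion with nothing but strings. Here's a toy sketch (the keys are made up) of what a client like the console does: it groups flat keys by their shared prefix and draws those prefixes as folders:

```python
# S3 keys are flat strings; "folders" are just a prefix convention.
keys = [
    "acg-ad.png",        # object sitting in the "root" of the bucket
    "test/acg-ad.png",   # same file uploaded under the test/ "folder"
]

def common_prefixes(keys, delimiter="/"):
    """Group keys by their first path segment, like a folder listing."""
    prefixes = set()
    for key in keys:
        if delimiter in key:
            prefixes.add(key.split(delimiter, 1)[0] + delimiter)
    return sorted(prefixes)

print(common_prefixes(keys))  # ['test/']
```

The S3 ListObjectsV2 API does the same trick server-side when you pass it a Delimiter parameter.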

There are no file folders. It's all a giant, giant lie. So if we click back to our object, you'll see we've got the high-level information: when it was modified, the storage class it's currently sitting in (which is standard), its size, its name. We can also adjust some of the properties here, so we can change the storage class, and we can add metadata to it.

So if you want to do key/value pairs and attach additional information to these files, that's handy for automation. We can also lock this object so that nobody can delete it, and we can add more tags to it as well. Of course, we can go to permissions, which allows us to add complicated permissions.

You can see here the object access for the owner, which would be me, and we can add public access, even though it says we can't right now because we've blocked it, which is great. Anything we add under here doesn't apply, because we blocked it. But we can add other accounts, and we can add other users. That's, really simply put, how S3 works: very easy. An object, which has a key (which is its name), goes into a bucket.

I'm going to flip back to the slide for a moment because I want to talk in depth about permissions. So we know we have an object, we know we have a bucket, and we know we have a bucket policy; we've seen a little bit of the bucket policy already. Again, if you have questions, just fire them off in the LinkedIn comments.

I am monitoring those actively, and I expect some questions on this part because it gets kind of weird. So, we saw a bit of the bucket policy already: when you have a new bucket you create, you can say no public, don't let anything in this be public. And while we saw that in a nice little dialog, what was actually happening behind the scenes was that the console created a bucket policy for us, and that bucket policy said no public. You can edit that bucket policy and say public with the following exceptions.

You could remove the public blocker altogether. You can define how this bucket is used. The idea of a bucket policy is that you're taking a set of permissions and applying it to the bucket itself, and that can say: check to make sure I know who's accessing me; everybody's allowed to look at this image file, but not this image file.

There's a rather complex set of things that you can do. Makes sense? Now, this was the original way to do policies and permissions in S3, because the newer way didn't exist. So bucket policies apply to the bucket, and then anything that goes in the bucket can have additional object policies on top of it.

The other, primary way to apply permissions in S3 is using AWS IAM, the Identity and Access Management tool. That allows you to assign permissions to users or to roles. I am a user; Mark is a user. A role gets assigned to a thing: to an EC2 server (an instance is the parlance there), to a container somewhere, or to a Lambda function. Roles get assigned to things.

Okay. So you can think of the bucket policy as really a role assigned to that bucket, but it's managed on its own, and that's the problem; that's where people get messed up. Really, a bucket policy is just a permanent IAM role assigned to that bucket, but it's managed in the S3 area of AWS, while all your other permissions are managed in IAM. IAM is where you can say: you know what, Mark's allowed to access this bucket, but only objects that fall under the test folder; anything with the prefix test/, Mark's allowed to access. You can do that fine-grained stuff assigned to the person or the role in IAM, and that's really where you should be assigning the vast majority of your permissions. Stick to one place.
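That test/ rule, expressed as an IAM policy attached to the user or role, looks roughly like this (a sketch using the demo bucket name; the second statement is there because listing the bucket is a separate permission from reading the objects):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectsUnderTestPrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::road-to-reinvent/test/*"
    },
    {
      "Sid": "AllowListingTestPrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::road-to-reinvent",
      "Condition": {
        "StringLike": { "s3:prefix": "test/*" }
      }
    }
  ]
}
```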

Don't do both; both ends up being confusing. Now, it's Monday and I want to melt your brain a bit, because why not? Brains are for melting; that's what we're going to go with for Monday morning. There are two more additional ways to actually apply permissions in S3. I'm coming back to the how-to-secure-an-S3-bucket article from a while ago.

So we talked about IAM policies and we talked about bucket policies. There's also a thing called an access control list that gets applied, and there's query string authentication, or URL-based access: you can give somebody a URL and say you can post to this URL, or you can get from this URL, and it goes directly into S3.

There are scenarios where you want to use those. However, I strongly (so, so strongly) recommend that you keep your access all in IAM. It's easier, simpler; that's what IAM is there for: keeping all of your permissions in one place. It gives you a lot of great tools, a lot of great ways to audit and check your permissions, so try to keep them in IAM as much as possible. If you have to go to the bucket (remember, think of the bucket policy as a role in IAM assigned to the bucket permanently), that's fine; that works too. Try to avoid access control lists, and try to avoid the query string and URL-based stuff, because you're just going to get lost. And try not to mix these various ways on the same bucket, because there is no simple way to view all of the permissions assigned to a bucket and everything in that bucket.

Okay, that's why, as we talked about at the start, misconfiguration is the number one problem: there are too many options. Now, those options: trust me, there are times when one of them will save your butt, and when you're in that scenario you really want that option. But do not mix and match. Just because you can doesn't mean you should (a good rule for life in general). Try to separate these by bucket,

if not by account. And again, that public thing: supercritical to go by account, just to save your you-know-what. So that, in a nutshell, is S3. Now, I've got two minutes left and I want to cover something really, really simple. You noticed my website here; we're looking at markn.ca, and this is the page where I'm tracking all the stuff we're doing around re:Invent. You can see here, hey, this is what we're doing today right now, and then on Wednesday at 10 a.m. we're going to talk about Athena and QuickSight, and I've posted all the stuff we've talked about already.

This page is actually being served out of an S3 bucket; S3 can act as a full web server, which is super cool. What we're going to do is flip back to S3 here, and we're going to create road-to-reinvent.com. We click next, we leave all of the bucket options as is, and then we set permissions.

We're going to remove the block on public access, and we're going to create this bucket. Now, the reason we named it with a .com is that you can actually line your bucket name up with a domain name. Okay, then we click on the properties of the bucket.

We're going to go to static website hosting, and we are going to say: use this bucket to host a website, with index.html as our default document. There are a bunch of options here for setting up your website; we're simply going to click save. Then we go back to the root of this bucket and upload a file, that exact same file we've uploaded time and time again. When we get to the permissions, we're going to leave them as they are (we'll deal with that in a second), standard tier, and upload.
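Behind that dialog, the console sets a website configuration on the bucket, roughly this shape (what the PutBucketWebsite API takes; error.html is a placeholder, since we only set an index document in the demo):

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```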

Just looking at the time here. Yeah, it's been uploaded. So if we go now and check, you'll see the URL is different. That's fine. But if we click on the URL, we still have access denied; just because the bucket is set to host a website doesn't mean that's going to work.

So what we actually need to do is go to permissions and say everybody can read this object. Big warning: hey, this is public. Again, don't do this in the same account; create a second account to do it. We're in a dummy demo account today. But now that we've assigned this to be public, if we just refresh, we should see that everyone has read access.

We come back to our URL, and there we go. This URL (s3.amazonaws.com, the name of our bucket, then the key name) gives us access to this picture. Okay, you can set up an entire website there. That's in fact what I've done with my personal website at markn.ca: it's in its own separate account and hosted there so that I don't make the mistake of doing something like exposing my personal photo archive.

So S3 has the ability to serve websites directly. Of course, if you want to make them fast, you can put something like CloudFront in front of them. And that, in a nutshell... we are at our time block here, so let me switch back to me. That was S3: you have buckets,

you have objects, and you put an object in a bucket. You have multiple ways of setting permissions; stick to one of them if you can, either IAM permissions for everything or a bucket policy for everything. If you need to have both a private bucket and a public bucket, put them in different accounts. Accounts cost you nothing; usage costs you. So you're not going to pay any more to put it in a separate account;

you're going to pay the exact same thing, and that's a good strategy operationally so you don't shoot yourself in the foot by putting something sensitive into a bucket configured for public access. We've seen that time and time and time again over the last few years: people make a simple mistake, taking a bucket that starts locked down and unfortunately adding additional permissions, and because of the different ways to add permissions they get tripped up and expose customer records or sensitive data. And that is a bad, very bad thing.

And this has been a quick introduction to S3. We're going to build on it Wednesday by looking at Athena and QuickSight; these are tools that help you access the data in S3 or visualize the data in S3. Really cool, kind of hidden services, two of my favorites actually, that are often overlooked and shouldn't be. So let me know what you think in the comments below. Thanks for joining the stream; the next one, as you saw, is on Wednesday at 10 a.m. Eastern. Again, normally I don't do Mondays, but because I'm at the AWS Summit in Toronto on Thursday, it made sense to do it this way.

So hit me up, let me know. Thanks for joining the stream, really appreciate it. Have a great day, and we will talk to you soon.
