This talk was delivered at AtlSecCon in Halifax, NS, on 10-Apr-2025
When was the last time you felt like you had enough time in the day to get your work done? Are you exhausted by the never-ending firehose of security challenges you have to deal with each and every day?
In this session, we are not going to change that reality. Sorry, security work is continuous, but it doesn’t have to be overwhelming.
This session looks at the workflows around your security practice and how it interacts with the business. Security is a service business, but teams are rarely set up in a way to deliver that service successfully.
There’s a lot of history that contributes to the current state of security teams, but that history typically isn’t serving a purpose. More often than not, the way we’ve built out our work leads to delays, frustrated colleagues, and eventually teams that work around us instead of with us.
This isn’t a talk about simply getting “buy in” from other leaders. It’s about breaking down our security goals and learning from how other types of teams and businesses are set up.
You’ll learn about the hidden challenges that impede your work, structures and workflows that can accelerate security improvements, and how to build stronger relationships with the rest of your organization.
I'm confident that most security professionals will answer this in one of three ways:
"I don't know."
"I don't think they are."
"No."
None of those are great answers to the question.
Nope.
...organizationally 😉
When was the last time you designed a process for your team?
No, I don't mean writing down a playbook (though you should be doing that). I mean working through the steps of a systematic effort in order to design a process that works for your team and your customers.
Have you ever done that?
Let's start with first principles. There's always a reason why things end up in their current state and there's a lot we can learn from that history.
Why do most security teams organize the same way? Is that the best approach? Or just something we ended up with over time due to external factors?
This all started with endpoints.
Acknowledging that there was risk with our desktops (yes, desktops), organizations started to have folks assigned to managing these systems.
Not like we do today, but the first steps were there. Organizing the OS and its updates, anti-virus software, and other steps to help protect the business.
The real nucleus of what we know of as the security team came to be with network controls. Rolling out firewalls, then intrusion prevention, and other controls around the perimeter was enough work that dedicated teams were required.
No more—well, less—side of desk work. We now started to see teams responsible for the castle wall protecting the "inside" of the business.
As connectivity expanded, we get closer to today. Teams are dealing with endpoint, network, and cloud controls.
While each of these areas contribute to defence in depth, we also approach them based on the security team's level of responsibility or influence.
Endpoint controls are still very much in the "OK, if it doesn't impact anything" bucket. Security teams tread lightly here, so as not to lose trust with the rest of the business.
Network controls are easier to roll out because they are typically entirely within the security team's purview, or at most involve a small handful of infrastructure teams.
Deploying security controls in the cloud can be more direct. With all resources available via an API, connecting to systems, monitoring them, and gaining visibility are more straightforward than ever.
But there's more to security than just these three areas. We've expanded to risk practices, compliance activities, and proactive work like threat hunting.
Security teams in medium-sized enterprises are likely to scale to one or two—or more—dedicated resources for each of these areas. Larger organizations can even get to the point where they have dedicated teams for each.
But one thing that tends to hold true—even for the smallest of teams—is that we organize our teams based on function.
This is Francine, she is responsible for our risk practice. Jo takes care of compliance. Etc.
Functional structures tend to exhibit these properties:
And it's this last point that is the source of most of our challenges.
I don't think this structure is conducive to workflows that will meet your goals. Or the goals of your customers.
Worse, I don't think that we have the time/energy/awareness to step back and examine the link between our team structure and our workflows.
Simply put, we are too busy doing the work to understand how our approach to the work is making it harder for everyone.
In this section, the audience is asked to—and politely does—participate in a group activity. They say each of the letters as they appear on screen.
A B C D
E F G
H I J K
Stop.
I'm not sure why y'all are doing it this way. Let's restart.
(In person, the audience almost always nails this part. They are saying each of the letters in English at the same time and nailing the beginnings of the song as well.)
A B C D
Stop.
I ask the audience, "Why are you saying it that way?"
They are confused. I then repeat the beginning of the alphabet in Dutch. The letter sounds are very different than the English ones.
The point of this callout is that I had very different expectations for the activity. Expectations that didn't line up with the audience's assumptions.
On the same page & language, we restart.
A B C D
E F G
I ask the audience, "How many vowels have we said so far?"
This breaks the flow of the recitation and song. It's an unexpected question, even though it's a simple one to answer.
We restart for the 3rd time.
A B C D
E F G
H I J K
LMNO P
Inevitably, a North American audience will say L, M, N, and O as "elemenopee".
It's a fun call out and it runs counter to the previous pacing, but it aligns with the song.
The point of this is that it's an unspoken change that everyone just gets. They go along with it because of the ingrained cultural elements, not because they talked about it beforehand.
Everyone in the audience (or enough that the point is made) recites the alphabet using, "The ABC Song" instead of just saying each letter in turn.
There wasn't a discussion or agreement to do this. Outside of the subtle hinting in the visuals, it's just what everyone defaults to.
It's a cultural expectation. It's "the way we've always done it".
It's a direct parallel to a lot of activities in organizations and frequently the security team (me as the speaker in this case) is unaware of that expectation!
For fun, the audience gets to repeat the whole song without interruption.
A fantastic number will also—always—add the bonus line, "Now I know my A B Cs, next time won't you sing with me?"
For a bonus, unreleased tangent, it's pointed out that most folks also can't repeat a segment of the alphabet without starting at A and ending up in a close approximation of the song as well. Human brains are weird!
Everyone knew the song. You default to it, because you learned it and practiced it a lot as a child.
It's a shared experience that reinforces the original experience and understanding.
I restarted the group 4 times. Each time to clarify something for me or to force the group to conform to my expectations and requirements.
That's a generally frustrating experience. While trying to fulfill my needs, I cost the group time and enjoyment.
...pausing to let that sink in...
Teams generally work well (enough) together.
Don't be the one who disrupts that.
Don't be the one who disrupts that to serve your own needs...even if those needs will help serve the group!
Let's pivot to an even more frustrating topic. But it's a topic that we can actually learn a lot from and relate to as a group.
When they first rolled out, self-checkouts were hailed as technological advancement, a time saver, and an overall benefit to both the business and the customer.
There were some discussions about the balance of those benefits, but outside of the "old man yells at cloud" segment, there wasn't a lot of negativity...at first.
I bring up self-checkouts because I'd like to share a story to help illustrate my overall point of the importance of explicit service design. To help understand how we can all be more effective security practitioners, I'd like to talk to you about my local pharmacy...
Before rolling out self-checkouts about 18 months ago, my pharmacy had six checkout lines.
Each one of the checkouts was staffed. In peak times, they had six employees running the six checkout lines.
If we put ourselves in the owner's shoes, the six checkouts—running at a theoretical maximum—would require about $0.59/sale in overhead.
We get to that number by looking at the number of sales each line can process during an hour and the cost to serve that line.
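The arithmetic is easy to sketch. The talk only gives the $0.59/sale result, so the hourly wage and per-lane throughput below are my own illustrative assumptions chosen to reproduce that figure:

```python
# Back-of-the-envelope overhead per sale for the fully staffed setup.
# HOURLY_WAGE and SALES_PER_LANE are assumed values, not figures from the talk.
HOURLY_WAGE = 17.70    # assumed cost per employee, per hour
SALES_PER_LANE = 30    # assumed sales each staffed lane processes per hour
LANES = 6              # six staffed checkout lines

total_cost = LANES * HOURLY_WAGE       # labour cost per hour
total_sales = LANES * SALES_PER_LANE   # sales per hour at theoretical maximum

overhead_per_sale = total_cost / total_sales
print(f"${overhead_per_sale:.2f}/sale")  # → $0.59/sale
```

Any wage/throughput pair with the same ratio gives the same result; the point is that per-sale overhead is just labour cost divided by throughput.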
When the pharmacy deployed their self-checkouts, they made a couple of slight adjustments to the traffic flow.
The two middle lanes were converted into product shelves for those impulse buys. The back wall now housed three self-checkouts, as did the left-most checkout line.
The right-most checkout line was kept as a staffed line to help address any customer issues. This employee was also responsible for helping any self-service customers who encountered issues.
Now, when we adjust for the extra time it takes for self-service, the overhead drops significantly for the store.
They are pushing through fewer sales (120 vs 180), but at 25% of the overhead.
Looking only at sales volume, this might not seem like a great business move. However, the back end costs for employees are going to be significantly higher than maintaining the self-checkout systems.
The self-checkouts also don't have scheduling issues. They are always available and you don't need to try and predict demand. There's a consistency there that simplifies operations.
The problem—ok, a problem—the store encountered quickly was that four of the six self-checkouts weren't seeing much use.
The reason was simple, customers weren't seeing them!
The product displays, which were thought to be a clever way to re-purpose the previously staffed checkouts, were interfering with the view of the self-checkouts.
Customers were queueing up like they used to for the staffed checkouts and not taking advantage of the additional self-checkout capacity.
When we account for the reduced throughput from this challenge, the overhead is half of the full service approach, not a quarter of it.
That's a huge impact to the expected savings. This is a problem that needs to be solved.
The solution the pharmacy came up with was to remove the obstructions. This makes perfect sense and really opened up the area.
While it removed the ability to convert the impulse buyers, it made it a lot easier to see the entire set of checkout options.
But there was a problem...
A significant percentage of the customers for the pharmacy are seniors. Seniors who wanted nothing to do with the self-checkouts.
When presented with the suite of options, the seniors overwhelmingly selected the full service option. To the point where they were queueing up when almost all—if not all—of the self-checkouts were open.
This reduced the checkout throughput of the store dramatically.
Any guesses on how the store "solved" this challenge?
To address this issue, the store put up a new half wall. They physically blocked the direct access to the full service checkout.
The positive (?) aspect to this solution is that it helped to shape the queue. Instead of blocking traffic to the main shopping aisles, the queue now formed in the checkout area.
However, this block reduced the visibility of the full service checkout. The customers who wanted to use it had to now go out of their way to queue up for it...if they saw their preferred option at all.
This also doubled the walking in their workflow. They now had to walk to the queue, move to the full service checkout, and then walk past all of the self-checkouts (again) to leave the store.
This is not a good solution and customers complained. To help address this, the store added an additional staff member to help guide more people to the self-checkouts.
In isolation, each of these decisions makes sense. Given problem X, solution Y is a reasonable approach. But, when you examine the overall workflow, the entire problem space, you see how ridiculous these steps are.
From the business perspective, the numbers are better. Overhead is down.
But what about customer satisfaction? This is much harder to measure. Anecdotally, as a customer, I can tell you it's down. How much will that impact their bottom line? I'm not sure.
For our purposes, the key takeaway is that even though the steps taken to address each issue were logical and moved towards the stated goal, the result isn't what was intended.
It's not just my experience or this pharmacy, self-checkout has not been an amazing solution.
Through multiple iterations of the various platforms, a positive, smooth self-checkout experience remains very rare. This is now one more thing that we just put up with...despite the general feeling.
Again, this is a result of a series of logical decisions. The problem is that the context window for those decisions got smaller at each and every step.
The end result is a lot of effort and an outcome that may—or may not—align with the actual business goals.
While there are formal methods of doing service design, at its core, simply asking questions and listening to the feedback will improve your team's workflow significantly.
However, the principles proposed in "This is Service Design Doing" are a great way to establish a shared understanding of what you're setting out to do.
In the simplest terms, those principles are:
"This is Service Design Doing" is an excellent starting point. It's not the only reference out there, but it's very approachable and the Methods book is a great playbook to help you implement changes in your team.
There are a lot of different frameworks for doing risk and threat assessments. There are advantages and disadvantages to each, though really any will do.
The fact that you're conducting assessments—and regularly updating them!?!—is the most important thing.
How many folks use one of these frameworks? Or something similar?
Do you conduct the assessment when the team is writing the code and building the solution?
...when they are testing the solution out?
...or maybe when it comes time to run the solution?
Trying to start and then finish an assessment just as things are going to production is far too common. We—the security team—end up in this position often because of some of the service challenges we're talking about here.
Of course the answer is that you should be doing risk assessments as a continuous process. There is assessment activity at all stages of solution development.
But, this only works if you're collaborating with the builder team. If you have the trust of other groups in the business. You have to work together and towards a common—and commonly understood—goal for this to actually work.
How do you end up in this utopia? This fictitious, "it's easy to put on PowerPoint" world?
The honest, open answer is, "Slowly, patiently, with a series of small steps that each get you closer to your shared goals."
Let's start by looking at the service design principles and the questions we can ask ourselves in order to start to find the path forward.
If we take the customer's perspective, we should have answers to the following questions:
When it comes to risk assessments (and other security work), often the answers are:
Those are not great answers and they are strong indicators that we—the security team—need to be doing a much better job of communicating.
When addressing a good representation of your stakeholders, ask the following of your own team (security):
Making small changes and getting feedback as quickly as possible is one of the most important things you can do for your work.
These are all questions that will help you build your feedback loops and help you create a truly iterative process.
In the examples we worked through today, we saw the value of taking the big picture view. Understanding the entire process is the only way to avoid the shrinking context path like we saw with the self-checkout example.
Visualizing and orchestrating the whole process is key to breaking out of your silo. It's how you counter the limitations of the functional team structure.
Too many teams lay out their workflows based on their understanding and expectations of the customer. While it's possible that this might be accurate, it's unlikely.
Getting out and experiencing your customer's reality will help you understand their perspective. That understanding will lead you to better solutions.
My pharmacy didn't understand the majority of its customers. They missed the fundamental frustration that self-checkouts bring up with their older customers. No one wants to feel like they don't understand or that they are the problem and "don't get" the technology.
Sustainability in processes is tied to complexity. Do not attempt to design a process that covers 100% of the edge cases. A workflow that solves 80–85% of the most common cases and has an allowance for the remaining 15–20% will be far more effective.
When making a decision, the simpler path is where you should be aiming.
If your customers are unhappy, you have work to do. Frustrated teams work around security workflows. Not because they don't want to be secure, but because they want to get their work done.
Security is in their way. You have to avoid that at all costs.
So, do we think that the structure of our teams is influencing our workflows? And that these workflows are not serving our needs or our customers?
I do. And I think we need to change. I'm confident we can change and that those changes don't need to be all-encompassing to start.
We start by choosing to address these gaps.
We build a network of support within the business. Build understanding of how other teams work, how they communicate, and how our shared goals align.
You cannot succeed as a security team without the support of other teams in the business. The numbers simply don't add up. You need to succeed together.
The good news is that you have the same goals, you just may be speaking different languages right now or failing to share each other's perspectives.
You can address these challenges and improve your security by working together. And that starts with you taking a small step towards that goal.
As the Vice President, Cloud Research at Trend Micro, I had a mandate to educate others about cloud security and enough leeway to experiment with how I went about it.
We had a fantastic communications team who were eager to try out new platforms and new approaches. With streaming and podcasting really starting to take off, we launched a new show, "Let's Talk Cloud".
Right out of the gate, we knew this was going to be a learning experience for us. We kept the show simple to start with. The first show was a discussion between myself and two of our technical leaders in the field, Jeff Westphal and Fernando Cardoso.
Jeff called in from an event where he was presenting and Fernando was in one of the Trend offices. It was a very scrappy setup, but it worked. The conversation flowed well and we were able to draw in a modest live audience.
For the remaining 5 episodes in the first season, we stayed within the Trend Micro family when recruiting guests. This made it a bit easier to justify the rough edges that we were still smoothing down.
By the end of the first season, we had a reasonably smooth-running show that was gaining a lot of traction. The view numbers were nice, but what was more important was how often someone—a customer, a colleague, or a random stranger—would tell me how they had watched an episode and it got them thinking.
For the next season, we were a lot more ambitious in going after guests. We had high profile guests like Forrest Brazeal, Patrick Debois, and Tanya Janca.
Sadly, I moved on from Trend Micro before I was able to film another season. However, our work on this show kicked off an ongoing series for the company. Next up was Let's Talk Security hosted by Rik Ferguson and then #TrendTalksBizSec and #TrendTalksThreats.
Going back through the archives of "Mornings with Mark" has been quite the experience. I've been both fascinated and a little horrified (the hair, the look, the production…yikes) re-watching some of those nearly 200 episodes.
It's interesting to remember that back then (2018–2019), a regular, dedicated vlog focused on cybersecurity and privacy on social media was pretty rare.
"Mornings with Mark" was really a space for me to explore my thoughts on these crucial topics and share some of what I was learning while traveling and teaching cybersecurity. It was also a bit of an experiment with social media and video platforms.
I ended up regularly multi-streaming to LinkedIn (where I was part of the streaming beta program), Twitter, and YouTube. Social media was very different in 2018 and the consistency of the vlog helped grow the audience over time.
Over its run, the show averaged 250 live viewers and then another 1,000 on-demand within that week. Those numbers may seem modest, but to have that type of reach for such a simple and raw vlog was really touching.
It’s nice to know that I was able to help that many people understand security and privacy just a little bit better.
Walking past the display of Leafs memorabilia, I turned the corner, opened the door, and took a seat in the conference room tucked away in the Air Canada lounge. I chuckled at the framed magazine covers adorning the walls with a who’s-who of Canada. I set my scorchingly hot Tim Hortons tea on the desk and waited to join the province-wide broadcast on CBC Radio.
In that pause, I took a moment of self-reflection and giggled quietly, half expecting a Mountie on a moose or a Québécois lumberjack with a mountain of syrup-drenched pancakes to pass by.
It felt deeply Canadian. Yes, an unbelievable, absurd, comical amount of Canadian-ness compressed into one moment, but that didn’t diminish my enjoyment. The smile that spread across my face stayed with me the rest of the day.
My first appearance on the network was specifically Canadian as well. In 2014, the CBC was looking for expert commentary on how the Canada Revenue Agency (CRA) was responding to a serious, widespread software vulnerability.
Having already started to appear in the media semi-regularly the previous year, I was a good fit for the article with my decade of experience in the Canadian public service. My commentary appeared alongside the director of the Canadian Internet Policy and Public Interest Clinic (CIPPIC) and Dr. Christopher Parsons from—at that time—the Citizen Lab.
I was humbled that my commentary was featured with such prominent experts in the field. Experts that I regularly read and still do!
That piece really sparked a passion in me. I enjoyed doing the analysis and offering a pragmatic voice on technology issues. A voice that I hoped—and still hope—helps to balance out other voices in the field.
Even then, I knew that my opinions often run counter to the louder voices that can grab the headlines. I’m ok with that. I’d rather go on the record saying something I believe in, something that I can stand behind even a decade later.
I’m also ok being that pragmatic voice. It’s not as flashy, but I believe that it can deliver more nuance and help make complex issues accessible to everyone.
Over the next 8 years, I would appear more frequently on various CBC properties. From St. John's to Victoria, I always tried to make time to support CBC journalists and hosts who were looking to help Canadians understand what was going on in the world of technology.
I was thrilled when things started to snowball as my comments were published more frequently. This led to a regular spot on TV, appearing on The Exchange with Peter Armstrong. I also covered issues for the CBC News at 6 in cities across the country and was featured in segments on the CBC News Network channel.
Easier—logistically at least—were the radio segments. I've always been an early bird, so when I delivered a couple morning drive-time segments, I started to get called more frequently. I get it, there's not a lot of folks willing to try and distill complicated issues into something easily understandable before 8 am.
Those early morning segments led to a regular radio column on CBC Ottawa Morning. Once every couple of weeks, I would chat with the host for 6-8 minutes and summarize the news of the moment and try to contextualize it for the audience of 100,000+.
I absolutely loved the challenge of it and got a lot of joy out of helping folks in the region to better understand specific issues.
The process was pretty straightforward. Sometimes the show would reach out the day before and ask if I could talk about a news story. Other times, I would reach out and suggest a topic flying a bit under the radar.
We'd agree on a topic and I would do an initial brief to help the show's researchers start to dig in to prepare the host for the discussion. After that, I would conduct my own research and start to outline the key areas of the issue, its larger context, and try to highlight a few hooks that would help it all land.
I'd circle back to the show with a couple of bullet points to help point the conversation in a productive direction and that was really it for formal preparation. I'd make sure to study my notes and go over key points so that the conversation could flow smoothly while still being informative.
It was great practice for a workflow that continues to help me daily. Being able to identify a topic of interest and then quickly map the landscape around it has been a game changer for me.
This workflow not only satisfies my natural curiosity, but it helps me to consistently contribute to my team and my community.
From 2014 to 2022, I made over 100 appearances on air and in print for the CBC. Each and every time, I tried to help Canadians better understand how technology impacted their lives and communities.
Looking back, I can see how I’ve grown as a communicator. Starting out with safer commentary like a Timbits player taking the field for the first time. With practice, I’ve become more confident expressing my opinions and I’ve found my voice. I moved from just stating facts to crafting explanations that break down complicated issues into simpler, relatable analogies to help everyone understand.
I’ve learned the value of consistently coming back to a topic over and over again. Just because I may be a little tired of talking of security and privacy fundamentals, doesn’t mean everyone is. It’s the patient repetition, the calm explanation of the key issues that truly reaches people.
Technology is complicated. There’s no getting around that. People are hungry to understand the questions technology raises and the questions it helps to answer.
Like that Air Canada lounge steeped in Canadiana, sometimes you need to go above and beyond to get the point across. For me as a security communicator, that means finding the hook inside the story that builds a bridge for the wider audience.
I loved my time on the CBC. It helped me grow as a communicator and touched on a nostalgia I didn't fully appreciate.
CBC regularly archives content from their site. Here are a few articles, videos, and radio segments that are still available to the public.
Here is a sampling of reference notes and materials that I prepared for various segments over the years. These focus in the last few years when I was active with the CBC.
I've archived them here on the site for my own memory, but also to show some of the behind the scenes process that goes into doing a regular technology column on a show.
In the fall of 2012, 5,000 people gathered in Las Vegas for the first AWS re:Invent. I was there and spent almost all of my time with my laptop open, surrounded by other builders, working together to try out new techniques and tricks that we were sharing with each other.
That spirit of community was infectious. After the event, a lot of the connections we made shifted online. And year after year, I would see those friendly faces at various events around the world, and we all did our best to make it back to Las Vegas in November for the biggest conference in cloud.
re:Invent grew almost too big. Every year it would expand, moving venues and adding more sessions. The event was scaled back in the pandemic, with the 2020 edition moving entirely online.
As the world has moved through the pandemic, the conference has grown back to its previous size and beyond. Almost 60,000 people attended the 2024 event.
It's at the point now where I don't think the hours in the day will permit anything more to be jammed into the week...though I'm sure I'll be surprised.
I've participated in every edition of the conference. As a builder from the start, an AWS Community Hero for ~6 years, and now as an Amazonian. During the period of rapid growth, I started writing an annual guide to the conference.
It started simply enough. I was trying to remind myself how to prepare for a physically and mentally exhausting week. I love attending this show, seeing my friends, making new ones, and learning a ton. But, it can take a lot out of you.
I started to experiment with how I approached the conference. I figured out little tricks that made my week easier. I genuinely wanted others to get the most out of the week too.
Eight times, I published my guide, starting in 2016:
You'll notice that the 3rd edition of the guide introduced the adjective, "ultimate". I debated whether or not to do this at the time.
It's a bold claim and I'm deeply uncomfortable drawing attention to myself.
However, that guide is also a 19 minute read. It's comprehensive to say the least. I think the "ultimate" description is accurate. The guides quickly became a months-long effort.
Not because they took that long to write, but because information about the show kept changing in the lead up. AWS would announce the basics (where, when, etc.) and then add more details as they locked things in.
In addition to the level of detail, the guides started to get a lot of attention. Each year the audience grew. People would reach out to me with great feedback and share how they had come across the guide and how it helped them.
All said, over the eight guides, more than 500,000 people read them. That's a staggering number of people, in line with the majority of attendees over those years.
While some companies did try to copy the guides, more simply wrote up their schedules and linked to my work. I really appreciated that and tried to keep things as neutral as possible.
The personal recommendation approach resonated with people. I'd like to think that it helped to seed the idea for the official AWS guides to the event. These guides were written by individuals in the community and helped a specific audience select sessions at re:Invent. I wrote the security guide for the first few times and I'm happy to see the effort continuing to this day.
Looking back at the guides, there are a few tips that still hold up and probably always will:
The guides were a way for me to share my excitement for the show. I always feel an odd combination of exhaustion and exhilaration when I attend AWS re:Invent.
There is so much to learn. So many people to connect with. It's a great reminder of the unlimited possibilities that drew me into technology in the first place.
While I don't write the guides anymore, I'm happy I did. I'm even happier that I still get to attend re:Invent—and re:Inforce!—even if it's a little more stressful helping to deliver the show vs. trying to take it all in.
Most of all, I'm glad that I was able to contribute to the amazing cloud community in a meaningful way. I'm happy I still get to contribute, and more than a little relieved that those contributions don't need 3+ months of work each year!
Security is everyone’s responsibility. How is that supposed to work? Our teams have struggled for a long time trying to move away from reactive work to planning and building for a more resilient future.
Is that shift even possible given our small teams and the never ending stream of issues to respond to? How can you scale your security practice in any meaningful way?
Security issues are often deeply technical and nuanced. Delegating work is a constant challenge and it feels like we’re explaining the same things over and over again. Security teams are stuck.
In this talk, we’ll dive deeper into the roles security teams play within most organizations. We’ll explore the common approaches to running a security practice, what works and what doesn’t.
Then, we’ll start to examine communication techniques that can have a positive impact. We’ll look at how you can shift your work from constant response to more impactful efforts by laying the groundwork for others to succeed.
You’ll walk away with a better understanding of the problems your team is facing and some small steps you can take now to enable other people within your organization to make better security decisions.
You are a dedicated security professional. You understand your area of expertise deeply and are working the best you can to help improve the security of your organization.
You're working on a team of like-minded individuals. While it can be challenging always facing threats and trying to help reduce risk, you generally work well together.
The challenge is that your team is accountable for the security of the organization.
But you work with a lot of teams in the rest of the business. Those teams are responsible for various business goals. They are working just as hard to meet those goals.
It can be hard to keep up.
Why is it hard to keep up?
A few years ago, a couple of different analyst firms looked at the ratios of security professionals to the rest of the business.
They found that there was about one full-time security resource for anywhere from 650 to 1,488 other employees.
That's one person responsible for the tools, processes, and output of at least 650 others. Is that even possible?
In 2022, a report from the UK Department for Digital, Culture, Media & Sport provided a similar metric.
They found that 12% of businesses with 250+ employees had 1 person responsible for cybersecurity...and that wasn't necessarily a full-time assignment.
In 2023, Venture Beat conducted a survey and found that most organizations spend just shy of 10% of their IT budget on cybersecurity.
38% of that spend was on staff. That works out to 3.8% of the overall IT budget spent on security personnel.
That sounds like a lot, but those are some of the most highly compensated individuals on staff. Good for those in the industry, but it's still representative of a disproportionate ratio of security folks to the rest of the business.
The Venture Beat survey provides even more insights. Most of the security spending is going to infrastructure and threat intelligence and detection.
That loosely translates into outer perimeter controls and figuring out what's already causing issues within your systems. Very little goes directly into scaling up the security team or preventing security issues in the first place.
The result of all of this is a lot of security folks feeling burnt out. Security teams are overworked, constantly fighting fires and trying to answer why a significant chunk of the IT budget is being spent on simply not losing ground.
We should do better. Can we?
...or lack thereof
The current workflow for most security teams is simple.
A business team has built or bought something and they want to get it into production as quickly as possible. They do have business goals to meet after all.
You, the security person, are the gate they must pass before that happens.
This works-ish. Sadly, it leads to a lot of "hero" behaviour which prevents the actual challenge from being addressed and piles more pressure on the security team members.
The fundamental challenge comes back to that ratio. There are a very limited number of security team members and way, way more business teams.
Security is almost always the slow down or roadblock for their productivity...even though security is working at 100% or more of expected capacity.
Keeping things at a high level, there are only 2 ways to smooth out this workflow.
You can slow down the incoming work.
or
You can reduce your security goals.
No security team should accept a reduced security posture as a matter of standard practice.
We need to continue to raise the strength and effectiveness of the security posture of our organizations.
We might be able to slow the incoming work down though...we'll come back to that in a bit.
Now, you can add more folks to the security team. You can scale up the team to handle more work.
This can help.
But, hiring anyone is an ongoing expense (something about folks always wanting to be paid 😉) and it takes time for new team members to come up to speed.
And as we've already looked at, the ratio of security team members to the rest of the business is so disproportionate that it's unlikely you'd be able to get it down to anything reasonable to actually address these challenges.
This is not a path that will successfully solve this issue.
So, what approach will work?
We—the security team—need to work with our business teams to reduce the issues upstream.
We need fewer security issues coming to us before systems are rolled out to production.
How do we do that?
Our general approach will be to use mechanisms to reduce our manual efforts.
A mechanism (in this context) means that we're going to try and create a tool of some sort—a process, an automation, etc.—and get folks using it, all while making sure it's delivering what we actually want.
What we don't want is more process and red tape. If something isn't serving the business' end goals, get rid of it!
...sort of
There's a lot we could look at here, but for this talk, we're going to look at the communications side of things.
Can we change the way we communicate and reduce the amount of work our teams are receiving? Can we make it easier to communicate in a more productive way?
Yes, we usually lean into technology to solve problems. We eagerly roll out code and additional layers of systems to address issues as we come across them.
That's not necessarily a bad thing. But, more frequently than we'd like to admit, we just end up with more overhead and challenges that are harder to address because the systems we just deployed have added more constraints!
We're going to take a deeper look at a breach notification from here in Canada. Don't worry, this will be a positive example that we'll be examining to see if we can make some tweaks to improve it even further.
But let's start with a general template for a notification...
The formula for a breach notification—e.g. letting people know there was a security incident and they were affected—is very straightforward...at a high or conceptual level.
It is:
Remember, we're not trying to blame anyone. We're trying to learn!
We're going to dive into a breach TransLink had in 2020. TransLink is responsible for the regional transit network in metro Vancouver.
They were breached in 2020 and the entire recovery and review process took 7 months. That includes the clean up and work with the privacy regulator. The initial incident response appeared to be quite quick.
Overall, I think their communications were good. When compared to a lot of security comms, they should probably be seen as excellent.
But, I'm a bit picky and I think TransLink could've made a couple of small tweaks to really knock it out of the park.
From the TransLink primary web page for this incident:
"
In December 2020, TransLink was the victim of a cyberattack. Upon detection, we took immediate action to shut down multiple computer systems as a protective measure and launched an investigation.
Over the course of the investigation, we worked tirelessly with cybersecurity experts to understand what happened and determine what information was unlawfully accessed. We also worked with law enforcement authorities and notified the Office of the Information and Privacy Commissioner for BC.
This investigation has been a complex and time-consuming process that took months to complete. It involved extensive analysis, the use of e-discovery tools, and manual data reviews.
The privacy review concluded in June 2021.
"
As you can see, that is a solid opening. However, it does fall into some very common traps. Let's make a couple of edits...
In December 2020, TransLink was the victim of a cyberattack. Upon detection, we took immediate action to shut down multiple computer systems as a protective measure and launched an investigation.
We worked tirelessly with cybersecurity experts to understand what happened and determine what information was unlawfully accessed. We also worked with law enforcement authorities and notified the Office of the Information and Privacy Commissioner for BC.
Here is what you need to know about your information.
Why those changes?
The original was too complicated, not empathetic, and it didn't set a shared context.
The changes we made shift the opening to quickly state what happened, hint at the scale of the response effort, and then dive straight into the number one thing the reader of the letter wants to know.
Of all the common traps the original fell into, the most egregious—yes, even in the context of a good communication, there can be things that are egregious!—is that it's written from what the organization wants you to know about the situation, not what the reader wants or needs to know!
Yes, breach notifications and other security communications can be used to reduce damage to an organization's reputation. However, it's critical that you remember that both parties in this communication are victims.
The organization—TransLink in this case—was the victim of cybercrime. The intended readers of this letter were also victims of that same crime.
As long as the organization wasn't derelict in their care of the information, this post shouldn't be written with the tone of "it's not my fault!", but one that lands more along the lines of, "we are both impacted here, but let's start to fix this by focusing on you".
Let's go for a complete re-write. We'll start with a strong and direct opener written with the reader and their position in all of this top of mind.
"
In December 2020, TransLink was hacked. When we found this out, we worked as quickly as possible to protect your data.
"
Simple. Straight to the point. With the first sentence, the reader knows what this communication is about and what happened.
The second puts TransLink in a positive light and it's also—without all of the fancy terminology or long-winded explanation—an accurate description of what happened.
We continue...
"
We brought in cybersecurity experts to help. We also contacted law enforcement and the Office of the Information and Privacy Commissioner for BC.
"
This next section is primarily a regulatory requirement. They need to let the reader know that they've complied with the local privacy legislation.
But, we frame it here as a follow-up to the statement about working as quickly as possible to protect your data.
This way, it shows—in plain language—the effort that the organization went to in response to the breach.
The next line is critical and it's often missing from these types of notifications.
"
We’ve contacted the people whose data was accessed during the hack to help them.
"
Remember, the original text that we're rewriting was published on the TransLink website. It went out to everyone. That makes sense due to the scale of the breach and the nature of the organization. This agency is the regional transit authority and its work impacts everyone in the area.
We add this line as a direct answer to the question in every reader's mind, "Was my data breached?". This direct statement answers that near the top, helping the reader focus on the rest of the message.
We follow that up with an explanation of what the reader can find on this page.
"
This webpage contains information about what happened. It lists what data was accessed and what steps we’re taking to try and make sure this doesn’t happen again.
"
And finally, we close this section with a catch-all to help answer any questions the reader may have after reading the rest of the page. This may be implied, but by stating it, the reader is reminded of the dynamic: the organization is trying to help reduce the overall risk and any potential harms that may come from the breach.
"
If you have any questions after reading this information, we’ve set up a few different ways to get in touch with us directly. Those methods are listed at the bottom of this page.
"
Again, the communication from TransLink during this incident was great. But, with a few small tweaks, I think we've improved it to focus on what matters most to their target audience.
Our updated version heads off a lot of questions by answering them directly. We also reduced the complexity of the writing, making the text easier to read. We've dropped the reading level from about second year of university to middle school (as per the Gunning fog index). That makes the entire text much more accessible.
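If you're curious how that readability claim gets measured, here's a rough Python sketch of the Gunning fog index. The syllable counter is a crude vowel-group heuristic (real tools are more careful), so treat the scores as directional, not exact:

```python
import re

def gunning_fog(text: str) -> float:
    """Approximate the Gunning fog index of a passage.

    Formula: 0.4 * (avg words per sentence + 100 * complex-word ratio),
    where a 'complex' word has three or more syllables. Syllables are
    estimated here by counting vowel groups, which is only a heuristic.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    avg_sentence_len = len(words) / len(sentences)
    pct_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_len + pct_complex)

# Short, plain sentences score low...
print(gunning_fog("We were hacked. We moved fast to protect your data."))
# ...while dense, polysyllabic prose scores much higher.
print(gunning_fog(
    "This investigation has been a complex and time-consuming process "
    "involving extensive analysis and manual data reviews."))
```

Shorter sentences and fewer polysyllabic words are the whole trick, which is exactly what the edits above did.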
This approach should reduce the number of inbound requests to the organization. And it's an approach you can use internally to do the same for your team.
Clear communication can reduce your workload.
Let's look at another positive example. This one is from CISA, the Cybersecurity and Infrastructure Security Agency in the US. CISA is the national coordinator for critical infrastructure and resilience in the United States and often acts as a cybersecurity centre of excellence for their public service.
We're going to dig into their Log4j vulnerability guidance page. They got this page up quickly when Log4j went public and used it as the single source of truth for the issue. They updated repeatedly with information about the vulnerability as it came to light and made sure that the page was as comprehensive as possible.
Here's a section of the CISA page that we'll be looking at. It's solid.
But, I do want to point out one approach that may create challenges for the intended audience...
Each of the highlighted passages are technical terms or industry specific language.
That's not necessarily a bad thing. CISA had a specific target audience in mind—security experts.
However, given their position within the US public service, they are also going to have a lot of general IT folks and other various interested folks reading this too.
The question is, can we reduce the specific language without reducing the effectiveness of the writing or the technical details?
We won't go through each term point by point, but here's a quick example of what we could swap out:
- "active, widespread exploitation" => "attackers are currently using this"
- "unauthenticated remote actor" => "attackers don't need to login to use this successfully over the internet"
Yes, sometimes a longer sentence is a clearer one. When in doubt, a longer sentence with fewer niche terms and more straightforward language is probably going to be more effective.
The page also needed more context. While it covers a specific vulnerability, that vulnerability has a wide-ranging impact that is crying out for more explanation.
The second paragraph, with "...is very broadly used in a variety...", doesn't provide enough context. Something like this might've been more effective: "Log4j is a key building block of a lot of software and most people are unaware their systems are using it. It helps developers write log information that's helpful for troubleshooting, which is why it's a part of a lot of unexpected systems."
Last example, again a positive one.
This time, we'll look at an open source project called Prowler. This is "an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes".
It's a great project and helps a lot of organizations improve their security posture.
In this example, we're going to look at a specific detection from the platform and how it aims to help developers and security folks avoid a security issue.
Here's the detection information in full. It's typically delivered as a JSON object in the platform or teams will route these to Slack or some other system where they are typically working.
This is a solid detection. The description is crystal clear. The risk is well constructed and the recommendation isn't too bad.
But two things jump out at me here.
The first is the opening sentence of the risk, "The use of a hard-coded password increases the possibility of password guessing." That doesn't accurately convey the level of risk.
How much does this increase the possibility of the password being guessed? Is that actually the case with this detection? Why is this worth the time to fix?
The second challenge is the recommended fix. Sure, AWS Secrets Manager could help address the issue. But are there other approaches that would work here? Are there other secrets managers that would work?
Again, the original is solid.
But if it provided more of the why in the risk it would be more useful.
"Hard-coded passwords can be stolen by attackers or accidentally exposed in a source code repository. Avoid this pattern if at all possible, as attackers can easily compromise the account the password has access to."
Similarly, the recommendation can be expanded to help the recipient find the best solution for their situation.
"Using a tool to manage secrets—like AWS Secrets Manager—keeps passwords and other secrets out of your code. This pattern makes it easier to update that information (e.g., change the password), while keeping it more secure as the function requests the password only when it's needed."
A couple of small adjustments and we've reduced the number of dots the recipient is required to connect!
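To make the recommendation concrete, here's a minimal Python sketch of the pattern. The function name is mine, and an environment variable stands in for a real secrets store (AWS Secrets Manager, Vault, etc.); the point is simply that the secret lives outside the codebase:

```python
import os

# Anti-pattern: the password ships with the code and every copy of the repo.
# DB_PASSWORD = "hunter2"

def get_db_password() -> str:
    """Fetch the database password at runtime instead of hard-coding it.

    An environment variable is used here as a stand-in for a managed
    secrets store. Because the secret lives outside the code, it can be
    rotated without a deploy and never lands in source control.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; configure your secrets store")
    return password
```

Same idea, any secrets backend: the code asks for the secret when it needs it, and fails loudly when the secret isn't available.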
As we've seen in the examples we've discussed—and again, they are all positive examples!—we can make some small adjustments to our approach to communication to help everyone make better security decisions and reduce the incoming requests to our team.
For communications:
We've talked about communications with an eye to how clearer communications can reduce incoming requests to your security team.
We're going to take that a step further and talk about education. One gap most security teams have today is a failure to help the rest of the business understand how to prevent security issues.
I'm not talking about security awareness training (don't even get me started on that) or a patch management process. I'm talking about genuinely investing the time required to help other folks outside of the security team understand how security first thinking can help them.
Your team works regularly with a number of business teams.
As we discussed in the intro for this talk, that ratio is heavily weighted towards the business teams. You can't keep up with the work coming from all of the different business teams.
One way to help with this is to recruit other folks within the organization to advocate for more security-first or security-focused decisions.
Programs that help build this type of internal community go by a few different names—Security Champions, Security Guardians, etc.—for simplicity we'll call them "Security Advocates". Folks in this group—either "officially" recognized or not—are the people that other teams lean on for security help.
Most organizations have folks filling these types of roles for a variety of specializations. Whether it's usability, performance, accessibility, a specific framework, data analysis, etc., there's always that "go-to" for a certain topic.
Even when you don't have a specific program to nurture and expand this community, this type of dynamic still manages to surface. Making it an actual recognized effort has a lot of benefit. The foremost being you can track your efforts and invest (time, money, etc.) where it's having the biggest impact.
Once you've identified these folks, you can start to shift the dynamic between your team and the business teams.
Even if you don't identify these advocates, you should try to shift the dynamic between the security team and the business teams.
Your goal as a security team should be to teach the business teams about security as often as possible. With few exceptions, you should evolve your current workflows to move as much of that work to the business teams as possible.
Now, I know what you're thinking. Why would other teams take on our work? And if we cede these responsibilities to those teams, what are we supposed to do?
For your work, don't worry. There is and will always be more than enough security work to go around. 🤦
For the business teams, the advantage is easy to understand. They are best positioned to understand the full context of a risk decision (what are the risks of this new feature/solution/product?). Understanding how security can help them meet their business goals helps them make better decisions. That improvement reduces the time it takes to get things out the door and lets them meet their goals more quickly.
Remember, this is not a complete move of security decisions to the business team. The goal of this effort is to move the decisions that are best made by an informed and educated business team to that team. The security team should be contributing to organization-wide challenges and cross-team risks.
As these efforts mature, your team will do less teaching and more sharing with the security advocates. They in turn will take on more of the teaching role.
This can happen organically. But in each case where I've seen this type of effort succeed, it's been through a well understood and funded program.
That can mean any number of things, but it's common to have some sort of incentive structure for the advocates. Whether that's perks or specific compensation rewards or a faster path to advancement. Find what works for your organization's culture and make sure that this type of program is set up so that everyone involved sees the benefit.
You may see this and think it'll never work for your organization. Business teams don't care enough about security to give it this type of prioritization. The cooperation you see today is only because teams have to deal with security (whether by regulation or policy).
When I've discussed that idea with executives around the world, I see a common problem. Most people think of security as work to stop bad things from happening. While that's part of it, that's only a fraction of the work under the security umbrella.
The goal of security is simple. It's to make sure that what you build works as intended...and only as intended.
That's a positive goal. Stopping bad things is a negative goal and it's impossible to actually track that. The positive goal is easier to get people to rally around.
When you understand that security is trying to make sure that the work a team is doing works and only does what it's supposed to, now everyone understands they are working towards the same goal!
Security and the business have the same goals.
They all want:
To meet those goals, you need to provide the why.
Why does this request matter? Why is this risk an issue?
If you help people understand the why, they can make better decisions moving forward. We want people to think through each situation that comes up. Technology is too complicated to map out each potential challenge beforehand.
If people understand the context of a requirement, they can make better decisions. As the expert, it's up to you to provide that understanding.
Remember that you are the security expert. No one shares your context. You have a broad understanding of the threat landscape, the controls within your organization, and the overall risks the business is trying to balance.
The business teams are just trying to get their work done! They have goals they are working towards and are trying to navigate the various systems and processes to the best of their abilities. They are experts in something else entirely and should not be expected to be or become security experts.
Your goal is to make security frictionless. Or, maybe a better way to put it: your goal is to use friction judiciously, helping other people make better decisions.
How can you start? Here are a few ideas for some simple techniques to get the ball rolling:
Let's take a look at how the business team and the security team approach the same issue.
There was a vulnerability in the popular django Python framework in 2022. This framework is used to help build web apps and APIs. The vulnerability was an SQL injection—sending bad database requests to generate unexpected results—that could expose data that shouldn't be available.
This was an important issue to fix, but not an emergency. Think weeks, not days.
If we put on our security hat, we see that...
Risk
Exfiltration of all data in connected database
Ease of exploit
Simple. Crafted string input will start attack
Recommendation
Patch all instances of django with available patch to address issues
Likelihood of exploit
???
Recommended timeline
As soon as possible
With our builder/business hat on...
Risk
Attackers get all of the data in the django database
Functions impacted
Trunc() & Extract()
Issue
Trunc(kind) & Extract(lookup_name) fail to properly sanitize input
Impact to our app
One use of Trunc() in codebase currently
Recommended timelines
Low priority. Combine with future django updates
If we line up these perspectives—by working together as we've discussed—here's where we end up:
Impact to our app
Trunc() used once...as of our last code scan
Risk
Access to the entire database
Risk of the fix
Untested changes to access APIs
Ease of exploit
Simple. One bad input
Recommended timelines
Mitigate now with process changes. Patch within four weeks
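As a sketch of what "mitigate now with process changes" could look like in code, here's a hypothetical allow-list guard. The function name and the allow-list are mine, not from django or the advisory; the idea is simply to never pass raw user input into date-truncation parameters while waiting for the patch window:

```python
# Hypothetical mitigation: user-supplied values never reach Trunc(kind=...)
# or Extract(lookup_name=...) directly. They are validated against a fixed
# allow-list first, and anything unexpected is rejected outright.

ALLOWED_KINDS = {"year", "quarter", "month", "week", "day", "hour", "minute", "second"}

def safe_trunc_kind(user_value: str) -> str:
    """Return a normalized truncation kind only if it is known-good."""
    kind = user_value.strip().lower()
    if kind not in ALLOWED_KINDS:
        raise ValueError(f"unsupported date truncation: {user_value!r}")
    return kind

# In a django view this might be used like (not run here):
#   queryset.annotate(period=Trunc("created_at", safe_trunc_kind(request.GET["kind"])))
```

An attacker's crafted string fails the allow-list check instead of reaching the database layer, which buys the team the four-week patch window from the combined plan above.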
Remember, most security teams are feeling the crunch. They are overloaded and under budget pressures.
A lot of that has to do with the fact that a small number of security professionals are accountable for the security of a large number of business teams!
Security often ends up blocking other work and tries to fix that by simply working harder.
Focus on building out mechanisms that help reduce the manual effort required to do any security work.
However, focusing first on clear communications can help free up resources because you're helping everyone in the organization to better understand security and specific issues without fielding individual questions.
Streamline the work your team does receive by aiming to reduce issues upstream. By educating business teams so that they can make stronger security decisions, you'll reduce what falls to your team to handle.
Communicate
Educate
Today Twitter removed the legacy verified checkmark from the platform. Maybe. But also, maybe not?
Who really knows at this point, the specifics and the timeline will be whatever they will be given the current state of the platform. What is crystal clear is that the blue checkmark has now shed the last hints of its original use as a security control.
The original verified checkmark was either the result of a lawsuit or coincidentally an intentional feature launched at the same time. The goal was simple; provide a visual indicator that an account was in fact the person or organization it presented as.
At the time—and somehow once again—there was a flood of accounts pretending to be other people on the network.
This process went through a few iterations and had its share of challenges. Throughout its history, one thing remained constant: the process was a manual verification of the account.
When I applied—yes, I was a legacy—I submitted a copy of my identification and a list of public references in various media publications.
I was denied the first time I applied. Not notable enough.
What did that mean? There was no clear answer. This was a problem throughout the program’s lifecycle. What was this mysterious “notable” bar?
A few weeks later, I re-applied with an expanded list of public references and was verified. At this point in my career I was regularly appearing on CBC television and radio as a technology expert and being interviewed about cybersecurity issues by various media outlets around the world.
This work aligned directly with the goal—at that point—of the program. If someone saw me on TV or quoted in an article, they could have a reasonable assurance that tweets from @marknca were from me.
Part of the reason that assurance held up was the rules set out for the accounts with the blue checkmark.
Your profile picture had to be a clear picture of you. While your handle could be anything, your display name needed to be your actual name. You also had to link your phone number to your account—though it was not visible publicly.
You could change your profile picture, but it was supposed to always be a clear picture of you. Any changes to your display name could result in a loss of verification or a re-verification process…though this seemed to be rarely followed up on.
The rules were slightly different for organizations and brands. It wasn’t a perfect setup, but it did a reasonable job of reaching the security goal; helping users spot imposter accounts.
The verification process has been rightly criticized throughout its 14-year lifecycle. The biggest complaint was the “notable” criteria. It’s always been somewhat hand wavy.
Because an account had to reach some arbitrary level of notoriety, the verification process wasn’t accessible to every user. Scenarios where verification could prevent real harm weren’t addressed and users had to find help—if they could—through other abuse reporting mechanisms on the platform.
At the time, and in retrospect, the program was always going to have these issues because of the manual aspects of verification.
When Twitter launched the program, the platform had somewhere around 35 million active users. Even then, the process wouldn’t scale. Let alone as the platform reached 450 million users in 2023.
If we do some napkin math, let’s say a full time employee can ‘verify’ 100 accounts per week (about 15 minutes an account). That one employee, only working on verification, will process about 3,800 accounts in a year.
That was roughly 0.01% of the platform in 2009. And less than 0.001% of the platform today.
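You can redo the napkin math in a few lines of Python. The 38 working weeks is my assumption for how 100 verifications a week lands on roughly 3,800 per year; note the coverage figures are percentages of the user base:

```python
# One full-time employee verifying accounts at ~15 minutes each.
verified_per_week = 100
weeks_worked = 38  # assumed: a working year after holidays, meetings, etc.
accounts_per_year = verified_per_week * weeks_worked  # 3,800

users_2009 = 35_000_000
users_2023 = 450_000_000

# Coverage as a percentage of active users.
coverage_2009 = 100 * accounts_per_year / users_2009  # roughly 0.011%
coverage_2023 = 100 * accounts_per_year / users_2023  # under 0.001%
```

Even with generous assumptions, one dedicated verifier covers a vanishingly small slice of the platform in any given year.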
It was never going to cover everyone.
That failure to scale turned this critical security control into a status symbol. The blue checkmark shifted from being a way to ensure that this account was valid to a social status indicator.
“This account is important.” was the takeaway as verified accounts became a smaller and smaller percentage of the platform.
This was made even more pronounced when the program was closed to new applicants a number of times over the years. In the end, only about 420,000 accounts were verified through this process in over a decade. That’s roughly 0.09% of the user base.
That tiny blue checkmark shows up on the user profile page and next to their display name on every single one of their tweets. When a public indicator is only available to a tiny fraction of users, it’s not surprising that it became a desirable badge for everyone on the platform.
Now the blue checkmark is part of the paid Twitter Blue subscription. The challenge here is one of mismatched perception. There are plenty of stories showing the impact of shifting the visual indicator away from identity verification to payment verification.
What’s interesting is looking at this from a security point of view.
I can’t think of any other case where a security control has shifted its utility so completely.
The underlying expectation by users—at least in the first few months of the new Twitter Blue era—was that the blue checkmark meant an account wasn’t an impostor.
The reality was that the blue checkmark meant the account had paid a subscription fee…or it might not be an impostor.
This is the worst type of security situation. Users are left in the dark as to what an indicator actually means.
If you followed the news about Twitter and took the time to view the profile of the user in question, you could see that the blue checkmark was from the—now—legacy verification program.
If you didn’t take those steps and were simply operating under the same assumption you had been for the past 14 years, you could easily draw the wrong conclusion about the account, believing it was legitimate when it was in fact not.
Security is rarely simple. It would be wonderful if decisions sorted neatly into “secure” and “not secure”, but the reality is that almost any decision can be the best security decision if it’s made with a solid understanding of the trade-offs being made.
Security decisions are all about context.
The fundamental challenge with the blue checkmark on Twitter for the past few months—and most likely, for years to come—is that it is trying to balance two completely different contexts.
The first, a strong assurance that this is who you think it is.
The second, that someone is paying a monthly fee to display a little digital icon next to their name.
That a security control became so desirable is an interesting case study. The challenge is that as it shifted to become a status symbol, it has become the worst type of control: one that no longer provides any tangible security benefits but is still widely believed to do so.
Robyn Bresnahan and I chatted about this on CBC Ottawa Morning on 13-Dec-2022.
Machine learning (ML) and artificial intelligence (A.I.) have moved from sci-fi to buzzword to everyday tool. The latest models are making mainstream headlines.
Creations from custom art generators like Dall-E 2, Midjourney, and Stable Diffusion have been trending on social media globally by capturing everyone’s imagination. Targeted solutions like custom AI profile avatars and marketing copy generators are finding audiences by creating unique solutions to very specific creative challenges.
Each of these solutions is a type of generative machine learning model. The latest to catch everyone’s attention? ChatGPT.
At the heart of machine learning are models. A model is basically a computer program. But instead of the program delivering very direct results (e.g., click this button to load this webpage), these programs try to predict or infer answers to questions in a specific problem space.
A model is trained on a large amount of data. That training process allows the model (or program) to make connections and identify patterns between various data points. Those patterns are key to the success of the model.
Given new, similar data, the model uses those patterns to predict or infer a result.
Let’s say you train the model on pictures of all sorts of different pets. Dogs, cats, birds, fish, etc. This training is clearly labeled. So, the program looks at the first photo and it is told, “This is a picture of a dog.” Repeat that several hundred thousand or even millions of times.
That builds the pattern of “A dog usually has four legs, fur, a tail, and a long nose.” Another pattern of “A fish is always in water, has fins, no arms or legs, gills, and a larger tail.” And so on.
Now, when the program is run, the user gives the model a new picture it’s never seen before. Essentially asking, “What pets are in this photo?”
The model then uses the patterns it knows about to infer what’s in the picture, returning the result of “This is probably a cat and this is probably a dog.” The answer is a “probably” because the model is using inference to make an educated guess. It doesn’t actually know what’s in the picture, but based on other pictures it’s been trained on, these are the most likely answers.
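The train-then-infer loop can be sketched with a toy “pattern matcher.” This is an illustration only: real models learn statistical weights, not literal feature sets, and the names here (`train`, `infer`, the feature strings) are made up for the example:

```python
from collections import defaultdict

def train(labeled_examples):
    """Collect the features seen for each label (the 'patterns')."""
    patterns = defaultdict(set)
    for features, label in labeled_examples:
        patterns[label].update(features)
    return patterns

def infer(patterns, features):
    """Guess the label whose pattern overlaps the new features the most."""
    scores = {label: len(patterns[label] & set(features)) for label in patterns}
    return max(scores, key=scores.get)

# "Training": clearly labeled examples, repeated (in reality) millions of times.
training_data = [
    ({"four legs", "fur", "tail", "long nose"}, "dog"),
    ({"four legs", "fur", "tail", "whiskers"}, "cat"),
    ({"fins", "gills", "in water", "tail"}, "fish"),
]
patterns = train(training_data)

# "Inference": a picture the model has never seen before.
print(infer(patterns, {"whiskers", "fur", "tail"}))  # → cat
```

The overlap score is the toy version of “these are the most likely answers”: the model never knows, it only matches new data against learned patterns.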
The latest wave of machine learning models work in reverse. Instead of giving them a picture and asking what’s in it, you provide a prompt and the model infers what that might look like or sound like.
Here, I used the prompt, “An odd couple pairing of a cat and a dog. Hayao Miyazaki style, digital painting”

Based on what the model—in this case, Dall-E 2—knows about cats, dogs, and Hayao Miyazaki’s art style, this is what it came up with.
ChatGPT is a wrapper around the GPT-3 generative large language model. It does the same thing as Dall-E, but with text. You provide a prompt and it will deliver a writing sample to match. When I asked ChatGPT to describe itself, this is what it produced…
ChatGPT is a language model that’s been trained to chat with you. It’s based on GPT-3, which is a really advanced model. ChatGPT can generate responses that sound like they’re coming from a real person, so it’s great for things like chatbots or text-based dialogue systems. It’s not a specific product or tool, but more like a type of model that you can use for different applications.
The results can be quite impressive when they are accurate. Remember the model is designed to produce passable prose, not perfectly accurate text content.
ChatGPT takes things a step further and provides a discussion-like interface to its model. Using that interface, I explored the key concepts behind this technology. You can read the results below in “A Quick Chat With ChatGPT.”
ChatGPT isn’t the end-all, be-all of generative AI. It isn’t going to replace human creativity or human effort. It can help accelerate your work though. Allie K. Miller put together a great list of areas where ChatGPT excels in this Twitter thread 🧵👇
ChatGPT is changing the game, and I want to share real things you can do with this AI system today.
Please save this thread and start testing this technology NOW so you’re ahead of the curve.
— Allie K. Miller (@alliekmiller) December 13, 2022
I agree with most of her key areas, specifically:
ChatGPT is a wonderful way to get past staring at a blank page! You can check it out at https://chat.openai.com/chat.
Mark: What is machine learning?
ChatGPT: Machine learning is a type of artificial intelligence that allows software applications to become more accurate at predicting outcomes without being explicitly programmed. It is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.
Machine learning algorithms use statistical techniques to find patterns in data and use them to make predictions or take actions. This can be used in a wide range of applications, such as image and speech recognition, natural language processing, and even in self-driving cars.
Mark: How do I make my own model?
ChatGPT: To make your own machine learning model, you will need to gather a large amount of data that is relevant to the problem you are trying to solve. This data will be used to train the model, which is the process of feeding the data into the algorithm and adjusting the parameters of the model to improve its accuracy. Once the model is trained, you can then use it to make predictions or take actions based on new data.
There are many different algorithms and techniques that can be used for training a machine learning model, and the choice of which to use will depend on the specific problem you are trying to solve. Some common algorithms include decision trees, support vector machines, and neural networks.
To create a machine learning model, you will need to have some knowledge of programming and statistics, as well as a basic understanding of the problem you are trying to solve. There are many tutorials and online courses available that can help you learn the basics of machine learning and get started with creating your own model.
Mark: What is the difference between a discriminative and a generative model?
ChatGPT: A discriminative model is a type of machine learning algorithm that is used to predict the value of a target variable (such as whether an email is spam or not) based on a set of input features (such as the sender, subject, and content of the email). It does this by directly modeling the relationship between the input features and the target variable, and using this model to make predictions on new data.
A generative model, on the other hand, is a type of algorithm that is used to learn the underlying distribution of the data, and can then generate new samples that are similar to the data it was trained on. This allows the model to capture the underlying patterns and structure of the data, and can be used to make predictions or take actions based on this learned information.
One key difference between discriminative and generative models is that a discriminative model focuses on predicting the target variable based on the input features, while a generative model is more concerned with modeling the underlying distribution of the data. This means that a generative model is often more flexible and can be used to solve a wider range of problems, but may require more data and computational resources to train.
Mark: What are the biggest technical challenges with generative models?
ChatGPT: There are several technical challenges associated with generative models, some of which include:
Mark: How can I address bias in my models?
ChatGPT: Bias in machine learning models can arise from several sources, including the data used to train the model, the algorithm used to build the model, and the way in which the model is used in practice. To address bias in your models, there are several steps you can take, including:
Mark: What is the difference between training and inference?
ChatGPT: Training and inference are two distinct phases in the development and use of a machine learning model. Training is the process of building a machine learning model by feeding it a large amount of data and adjusting the model’s parameters to improve its performance.
This is typically done using a labeled dataset, where the input data and the corresponding target values are known. The goal of training is to find the model parameters that best fit the data and enable the model to make accurate predictions on new, unseen data.
Inference, on the other hand, is the process of using a trained model to make predictions or take actions on new data. This involves providing the model with input data and using the model to generate predictions or take actions based on that data. Inference can be performed using either the same data as was used for training, or new, unseen data.
The main difference between training and inference is that training involves building and adjusting the model, while inference involves using the trained model to make predictions or take actions on new data. Training is typically done offline, while inference is typically done in real-time.
Mark: What are some examples of problems that machine learning is ideally suited to solve?
ChatGPT: Machine learning is well-suited to a wide range of problems, including:
These are just a few examples of the many problems that machine learning can be used to solve. Machine learning algorithms are highly flexible and can be applied to a wide range of problems in many different fields.
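To make the chat’s discriminative-versus-generative distinction concrete, here’s a toy sketch using simple counts. Everything in it (the two-word “dataset”, `predict`, `sample`) is hypothetical; real models estimate these distributions with far richer machinery:

```python
import random
from collections import Counter

# A tiny labeled "dataset" of (input, label) pairs.
data = [("bark", "dog"), ("bark", "dog"), ("bark", "dog"),
        ("meow", "cat"), ("meow", "cat")]

# Discriminative: model P(label | input) directly and predict the target.
conditional = {}
for sound, label in data:
    conditional.setdefault(sound, Counter())[label] += 1

def predict(sound):
    return conditional[sound].most_common(1)[0][0]

# Generative: model the joint distribution of the data, which also lets us
# sample brand-new (input, label) pairs that resemble the training set.
joint = Counter(data)

def sample():
    pairs, weights = zip(*joint.items())
    return random.choices(pairs, weights=weights)[0]

print(predict("meow"))  # → cat
print(sample())         # a pair drawn from the learned distribution
```

The discriminative half can only answer “what’s the label for this input?”; the generative half captures the shape of the data itself, which is what lets models like Dall-E and GPT-3 produce new samples.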
At re:Invent or attending remotely? Check out my “Ultimate Guide to AWS re:Invent 2022” post for some tips and tricks to get the most out of the show.
For the past few years, I’ve had the opportunity to write the “official” guide for attendees of AWS re:Invent that are interested in security. For AWS re:Inforce—the security-focused conference—I get to be a bit more specific.
Each time I create one of these guides for re:Invent my goal is the same: help everyone understand the foundations of cloud security. Regardless of where you are on your cloud journey, there is a ton of value in strong fundamentals.
…and the fundamental truth of cloud security is that it isn’t an isolated discipline. You need to understand how builders are solving problems in the cloud, what AWS has done to deploy strong services, and what options there are available for you to help with your security practice.
Here is the security topic guide for AWS re:Invent 2022 with links to all of the recommended sessions. You can also watch the sessions back-to-back using this handy YouTube playlist.
Security is often misunderstood.
That’s understandable given how it’s usually presented. Let’s fix that.
Simply put, security is one of the main areas of focus when building in the AWS Cloud. It works with the other pillars—Operational Excellence, Performance Efficiency, Cost Optimization, Reliability, and Sustainability—to help you deliver robust, resilient solutions.
I’ve built this guide with that approach in mind. These sessions will help security professionals take a cloud-native approach. It will also help builders of all experience levels understand how security fits into the bigger picture.
There’s a ton of great content this year and these sessions are a great starting point for your security journey and a good reminder of the foundations for those of us further down the path.
Quality service delivery is fundamental to a strong security posture. The principles in the AWS Well-Architected Framework will help your teams build better in the cloud.
EC2 is usually a major part of your compute infrastructure. This session focuses on how EC2 works at a fleet level. Understanding this is critical to expanding your security efforts beyond the instance itself.
As happens sometimes, this session was renamed and adjusted to focus on new features in Amazon ECS. The original session name was “The AWS container services journey: A detailed look.”
Containers are ubiquitous. So much so that there are a lot of AWS services that can run them. This session explains the use cases for these services and how your teams can use them to their advantage.
Observability has massive benefits for all builders, security teams doubly so. Learn about metrics, traces, log data, and OpenTelemetry and how it can help improve visibility into what’s happening in your builds.
You have access to mountains of data about how your builds are behaving. This session will help you understand what strategies work to manage that data and how to get the most out of it.
This session hasn’t been published yet
If there is a “trick” to compliance, it’s building evidence collection into everything you do. The good news? A lot of the work you’re doing around observability helps with that. This session will show you how.
The more accounts your teams use, the more you realize you need a strategy for visibility across accounts. Focusing on application monitoring, the lessons from this session are directly applicable to security.
The Amazon Builders’ Library is a gold mine of hard-fought knowledge about building in the cloud. This session takes a deep dive into how Amazon runs technical operations and what they’ve learned.
This session hasn’t been published yet
You can’t communicate the same information to a development team and your executives. The contexts are completely different. This session maps out how to tackle those differences effectively.
This session explains the business advantages to a strong security approach. Take notes, lots of notes, and use this within your organization. Strong security helps everyone reach their goals faster.
This session hasn’t been published yet
This session was originally titled, “2022 AWS Threat Landscape Report”
AWS Security has a unique perspective on the threat landscape. Learn about what they are seeing and how it impacts your organization…or more importantly, doesn’t impact your organization because of their hard work.
AWS’s approach to security is one worth replicating. The way they’ve distributed responsibility throughout teams that are building in the cloud is highly effective. Learn how you can do the same in this talk.
The AWS Security Reference Architecture is a good starting point for any build. This is a great technical foundation session, as it takes a closer look at the why and how of that design.
Encryption is a critical security control. Looking at its application for data at rest and in transit, this session lays the groundwork for a robust data strategy that will underpin your entire security strategy.
Thursday at AWS re:Invent 2022 brought yet another amazing keynote from Dr. Werner Vogels. As is his custom, Werner challenged builders to think more broadly about their approach, to push beyond perceived constraints, and to build better.
I shared my reaction to the keynote in a Twitter thread as it happened.

The security leadership session at AWS re:Invent 2022 was called “What we can learn from customers: Accelerating innovation at AWS Security.” CJ Moses, CISO of AWS, used this session to explain what AWS is doing to help everyone with security, to highlight some key releases in 2022, and to preview what we can expect in 2023 when it comes to security.
CJ definitely put his own stamp on the leadership session. I shared more in a Twitter thread.

The data and machine learning keynote at AWS re:Invent is always full of surprises. Swami Sivasubramanian brings a ton of energy to the stage and always has a strong theme and narrative for his talks. This year was no exception.
The keynote was a mix of new features, great use cases, and a call to action for builders. I shared my reaction in a Twitter thread as it happened.

[Live-tweet thread: 119 tweets by @marknca, 30-Nov-2022, 16:30–18:19. The embedded tweet content was not preserved in this archive.]
Ruba Borno delivered a great AWS Partner keynote at this year’s AWS re:Invent. This keynote is very different from the others. I always think of it as a series of sound bites. In small bites, it makes sense; as a whole, it can be tough to find the common thread.
That thread is really that AWS is focusing on partner success. They are putting effort behind partners because it drives customer success. My reaction in this Twitter thread 👇.
[Live-tweet thread: 74 tweets by @marknca, 30-Nov-2022 23:01 – 01-Dec-2022 00:31. The embedded tweet content was not preserved in this archive.]
AWS re:Invent 2022 is truly rolling now. CEO Adam Selipsky gave a fantastic keynote that covered a number of service launches, new features, and great customer stories. I live tweeted the keynote providing my reaction to each segment 👇.

[Live-tweet thread: 145 tweets by @marknca, 29-Nov-2022, 16:32–18:40. The embedded tweet content was not preserved in this archive.]
Every year, AWS launches a lot of new features and functionality leading up to and during AWS re:Invent.
As we kick off day one of this year’s show, here are my top five cloud security-related announcements for pre:Invent.
AWS Organizations is the only thing that keeps me sane when managing all of my AWS accounts. I only have a few personal accounts, but even that is enough to see the benefits of central management through AWS Organizations.
Multiply that challenge/relief by a few hundred accounts and you’ll quickly start to grasp the challenge most businesses face.
An AWS account is a wonderful, free security boundary. You should be using a lot of them.
This latest feature in AWS Organizations makes it easier to delegate administrative access. You can now delegate policy management to specific member accounts as needed.
This should make things a lot easier and help you maintain separation of duties within your security practice.
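To make the idea concrete, here's a sketch of the kind of resource-based policy document this delegation relies on. The account ID and the specific actions below are illustrative assumptions on my part, not taken from the announcement — check the AWS Organizations docs for the exact actions your delegation needs.

```python
import json


def delegation_policy(member_account_id: str, actions: list[str]) -> str:
    """Build an AWS Organizations resource-based policy document that
    delegates the given policy-management actions to a member account.

    The JSON structure is standard IAM policy syntax; the account ID
    and action list used below are illustrative only."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DelegatePolicyManagement",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{member_account_id}:root"},
            "Action": actions,
            "Resource": "*",
        }],
    }
    return json.dumps(policy, indent=2)


# Example: delegate service control policy (SCP) management to one account.
doc = delegation_policy("111122223333", [
    "organizations:CreatePolicy",
    "organizations:UpdatePolicy",
    "organizations:AttachPolicy",
])
print(doc)
```

From the management account, you'd then attach a document like this to the organization (for example via the `put-resource-policy` call), after which the member account can manage those policies directly.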
As is the case with any new AWS service, Amazon CodeWhisperer has been steadily improving since its announcement.
It’s getting to the point where more and more builders are going to want to try it out. That raises some interesting questions within your organization. Questions that are well worth debating.
See the discussions and posts around GitHub Copilot for more.
This latest feature release allows you to enable CodeWhisperer within your Single Sign-On (SSO) authentication systems. They’ve also added the ability to deliver the service to users who don’t have an AWS account via AWS Builder ID.
These are much-needed controls to help you safely enable builders within your organization.
“I would never log sensitive information”, said the developer about to inevitably 🤦.
It happens. We’ve all done it at some point.
This new feature from Amazon CloudWatch Logs uses a ton of pattern matching and machine learning to find any sensitive data that happens to make it into your logs.
Once found, the service will automatically mask the data based on the policy you configure.
IAM policies dictate who can retrieve the unmasked data via a specialized query in CloudWatch Logs.
I love features like this. There’s a minimal impact on workflow, but a huge boost to your security posture.
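Conceptually, this is pattern-based redaction applied as events are ingested. As a rough illustration only — the two regexes here are a made-up stand-in for the service's managed data identifiers, which are far more extensive — masking might look like:

```python
import re

# A tiny, made-up subset of patterns. The managed feature ships with many
# more data identifiers (plus ML-assisted detection) and applies them
# automatically as log events are ingested.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def mask_sensitive(line: str, mask: str = "*") -> str:
    """Replace each match with mask characters of the same length,
    similar to how masked log events render for unauthorized viewers."""
    for pattern in PATTERNS.values():
        line = pattern.sub(lambda m: mask * len(m.group()), line)
    return line


print(mask_sensitive("user=alice@example.com ssn=123-45-6789 ok"))
```

The appeal of the managed version is exactly that you don't maintain code like this yourself: you set a policy once and the service handles detection and masking for every log group it covers.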
Sticking with Amazon CloudWatch, you can now (finally) search, analyze, and correlate telemetry across accounts!
Dubbed cross-account observability, this feature follows the model of a lot of the AWS Security Services (like AWS Security Hub).
You can declare an AWS account a monitoring account and connect other accounts (source accounts) to view their data.
This integrates with AWS Organizations and IAM to give you a ton of flexibility right out of the gate.
Better still, no extra charges for logs or metrics. Tracing does have additional charges cross-account though.
A trifecta of improvements from AWS Backup. And yes, that’s an actual service name.
Side note, the most amazing AWS service name hands-down is: AWS Managed Services. Which expands to “Amazon Web Services Managed Services” the service 👨‍🍳
AWS Backup now:
Each of these helps integrate AWS Backup into your cloud environment. Backups are a critical part of resiliency and anything that makes that process easier to manage (and test!) is a welcome addition.
These are just a few of the announcements from the pre:Invent build up. I expect more security announcements during the week.
Remember, with few exceptions, these announcements are usually laser focused on solving a specific problem. We’re at the point now where I don’t expect too many completely new services…though I’m happy to see new ones launch!
Realistically, any new feature that makes it easier to implement key parts of a security practice is a welcome addition.
In addition to these features, AWS announced the AWS Digital Sovereignty Pledge.
This is a clear statement of how your data moves in the AWS Cloud and where it is stored. It’s worth reading through to understand what AWS has been building for the past 15+ years. None of this happened overnight, but the state of security in the cloud is now in a very good place.
You’ve probably read it in the guide 👆 or heard me say it a bunch in person or on Twitter: AWS pushes out a lot of updates during AWS re:Invent.
When I shared this thread 👇 on Twitter just before 2pm eastern, AWS had already published 52 different updates. WTF?!? That’s crazy. Good crazy, but still crazy.
More on the way for sure…
[Live-tweet thread: 10 tweets by @marknca, 28-Nov-2022, 18:31. The embedded tweet content was not preserved in this archive.]
Monday Night Live with Peter DeSantis is always a fantastic talk. Peter takes us behind the scenes at AWS and shows how various technical improvements enable the features and functionality that we know and love.
I live tweeted the keynote. Frustratingly in a split thread, but here are both parts…

[Live-tweet threads: a 3-tweet part and a 71-tweet part by @marknca, 29-Nov-2022, starting 03:31. The embedded tweet content was not preserved in this archive.]
Tweet 66/71 👇 Next tweet 👆 Start


@marknca tweeted at 29-Nov-2022, 04:45
Tweet 67/71 👇 Next tweet 👆 Start

@marknca tweeted at 29-Nov-2022, 04:46
Tweet 68/71 👇 Next tweet 👆 Start

@marknca tweeted at 29-Nov-2022, 04:48
Tweet 69/71 👇 Next tweet 👆 Start

@marknca tweeted at 29-Nov-2022, 04:49
Tweet 70/71 👇 Next tweet 👆 Start

@marknca tweeted at 29-Nov-2022, 04:50
Tweet 71/71 👇 Next tweet 👆 Start

@marknca tweeted at 29-Nov-2022, 04:51
I had a great chat with Robyn Bresnahan on CBC Ottawa Morning on 08-Nov-2022 to try and explain Mastodon. Listen to our discussion 👇
Twitter is currently a dumpster fire heading over a cliff.
In what can only be described as a chaotic festival of irrational whimsy, the site has seen drastic content and technical changes in the past few weeks.
This has people looking for alternatives.
There aren’t any.
…at least there aren’t any other Twitter’s out there. There are other social media platforms that have made different decisions over time, build up different cultures, and deliver fundamentally different experiences.
Enter Mastodon.
In 2016 Eugen Rochko formed the open source project and the organization of volunteers to help develop and promote it. Over time, that became Mastodon gGmbH, a non-profit registered in Germany.
The idea behind Mastodon is simple: a social media platform that centres around user choice.
With Twitter (and Facebook, Instagram, LinkedIn, etc.), there’s just “Twitter.” Everyone’s tweets are on the same platform and a series of algorithms help determine what content you see.
You can influence those algorithmic decisions by following different accounts, engaging with specific content, and using the platform more and more.
You can always go find specific content from accounts on Twitter, but you’ll have to work for it.
With Mastodon, those choices are far more explicit. There is no algorithm promoting content based on data analysis.
There isn’t one “Mastodon” system. The network is made up of a number of servers. Each server is its own island. Those islands can connect to each other, sharing content through federation.
You—the user—select a server as your “home base.” This decision will influence your experience significantly.
Pick a server that has a theme or primary topic that interests you…or one of the many generic servers out there.
Once you’re set up on a server, you login and see…nothing.
This wall of nothingness is one of the biggest blockers for new users joining Mastodon. It’s also one of the system’s biggest strengths.
As a Mastodon user, you can follow the content from almost any account on any server. If you choose to follow an account, you’re going to start to see their posts (or toots 🤦) on your home timeline.
Over time, you will follow more and more accounts, slowly building out the timeline of content you see. You can also view the local timeline for the server (posts from everyone on that server) or the federated timeline, which combines the local timelines of other servers yours knows about.
If you’re still looking for more content, you can search by hashtag (#), but not by the content of the post. That’s an important limitation and another place where users struggle as they move to this platform.
To summarize what’s available for content:
When you toot (post, it’s a post…🤦), accounts who follow you will see it. If one of those accounts boosts (essentially a share/retweet) that post, their followers will see it. It will also be added to the server’s local timeline and to any servers that have your server in their federated timeline.
Notice what’s missing? There is no algorithm promoting content or showing it to people to gauge their reactions.
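Those distribution rules are explicit enough to sketch in a few lines of code. This is a toy model of the rules just described, not the real ActivityPub implementation; the account and server names are hypothetical.

```python
# Toy model of Mastodon's explicit distribution rules: followers see a
# post, and a boost extends it to the booster's followers. Nothing else.
def audience_for_post(author, followers, boosters, booster_followers):
    """Return the accounts that see a post on their home timelines."""
    # The author's own followers always see the post.
    seen = set(followers.get(author, []))
    # Each boost pushes the post to that booster's followers too.
    for booster in boosters:
        seen.update(booster_followers.get(booster, []))
    # Note what's absent: there is no algorithmic amplification step.
    return seen

followers = {"alice@mstdn.example": ["bob@mstdn.example", "carol@other.example"]}
booster_followers = {"bob@mstdn.example": ["dave@third.example"]}

audience = audience_for_post(
    "alice@mstdn.example", followers, ["bob@mstdn.example"], booster_followers
)
# audience == {"bob@mstdn.example", "carol@other.example", "dave@third.example"}
```

Dave only sees Alice’s post because Bob boosted it; no system-driven recommendation ever widens the audience on its own.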
This is both an advantage and disadvantage for Mastodon. If you know the goals of the system, it makes sense.
You—the user—are in control of what you see.
This is also why there’s a very strong content warning culture within the platform.
If you think of Mastodon as somewhere between Twitter and a forum or chat platform like Discord, it makes a lot more sense.
If you are moving to Mastodon thinking it’s a Twitter replacement, you’re going to be frustrated. Very frustrated…and lonely.
Mastodon can help you maintain and build a community online. But it is its own unique animal. Keep that in mind as you explore and reach out.
I spoke with Robyn Bresnahan on CBC Ottawa Morning on 06-Oct-2022 about the announcement.
We all have that drawer or box of cables and chargers for different devices. Which charger goes with which device? Who knows. You’ll figure it out in five or six tries.
USB launched in 1996 with two connectors. The “A” rectangle we all associate with USB, the “B” square-ish end. Mini-A and mini-B followed in 1998. 2001 brought USB 2.0 with increased speeds and new connection options mini-AB, micro-A, micro-B, and micro-AB.
Each made sense in turn. They were smaller and better suited for phones and other portable devices.
![USB connectors and cable ends in order [ A B 3.0 Mini, Micro, Type C, Micro B ] A sampling of USB connector types](https://markn.ca/content/images/2024/05/some-of-the-usb-connectors.jpg)
But they were the start of a very real problem. The “U” in USB quickly changed meaning from universal to ungainly.
The EU passed a revision to the Radio Equipment Directive that includes a couple small changes that will have a very real and very large impact.
The revision passed with a 602 to 13 vote (with 8 abstaining) and can be summarized as:
The biggest and most immediate impact will be with Apple devices using the proprietary lightning connector, but this directive is broader than that.
As much frustration as some users are expressing about Apple’s lightning connector, it was a big win for users when introduced in 2012. Starting with the iPhone 5, iPod Nano, iPod Touch, and 4th generation iPad, all Apple devices and accessories would use one simple connector.
Named “Lightning,” it provided the same speed as the USB 2.0 spec of the time, 480 Mbps. Lightning is a perfect marketing name. The new connector didn’t provide any transfer speed boosts and, in fact, added a layer that Apple controlled to direct the ecosystem around its devices.
For users, the new connector did solve a very real problem. It was—and is—very simple to actually plug in.
We’ve all had that frustration with USB-A connectors where you just can’t get it oriented in the correct direction. It often takes 3 tries to pick the right orientation from the 2 choices. Lightning works in either orientation.
Click…and you’re connected.
The ripple effect that standardizing the connector had was that the user could be confident that any accessory purchases would properly connect with any of these devices and their successors for the foreseeable future.
Immediately, charging became a lot easier.
Investing in a new charger was easier to justify when you knew it was going to work for a few years. Up until this, Apple and others had a bad habit of changing connector types randomly. Each time that happened, you needed to either buy adapters or new accessories.
In 2014, the USB-C connector was finalized. It was formally adopted by the standard-setting body, the International Electrotechnical Commission (IEC), in 2016.
This new connector type fixed the “Which way does this connect?” problem in a small form factor and with enough capacity—pins—in the connector to allow for complex data transfers. It was, and still is, a well designed solution that solves most—if not all—physical connectivity issues.
While a bit more technically complex, this is a straightforward physical connection standard that has all of the pieces required to put the universal back in USB.
Most Android device vendors started switching over to the physical USB-C connector. Other electronics have slowly started moving over as new versions hit the market.
It wasn’t until 2018 that Apple released its first non-laptop with a physical USB-C connection, the iPad Pro.
A quick aside: you’ll notice that I’ve been using the term “physical connector” and its variants throughout this post. That’s because the protocol used to run power and data over USB is different than the physical connectors.
The connectors impact which protocols can be run, but not nearly as much as other factors. Fair warning, USB naming conventions are an absolute disaster. I’ll spare you the history and tears. The easiest way to think of it is:
As you move through the generations, each version can typically provide all of the features of the previous generations. Right now and for the foreseeable future, we’re using USB 5—40 Gbps.
This means that while physically two devices might connect (yay USB-C!), they may not be able to communicate fast enough to actually work. If you try to connect an external dock to your laptop, it might not have enough bandwidth to talk to all of the devices you connect to it.
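That dock scenario can be reduced to simple arithmetic. Here’s a rough sketch using the simplified “generation” shorthand from above; the Gbps figures are the commonly cited nominal maximums (the real marketing names, like USB 3.2 Gen 2x2, are messier), and the device demands are hypothetical examples.

```python
# Rough mapping from the simplified "generation" shorthand to nominal
# link speeds in Gbps. These are commonly cited maximums, not official
# USB-IF naming -- treat them as illustrative.
NOMINAL_GBPS = {1: 0.012, 2: 0.48, 3: 5, 4: 20, 5: 40}

def dock_fits(link_generation, device_demands_gbps):
    """Check whether a dock's devices can share one upstream USB link."""
    return sum(device_demands_gbps) <= NOMINAL_GBPS[link_generation]

# Two high-refresh 4K displays (~12 Gbps each) plus a 10 Gbps SSD
# overwhelm a 20 Gbps link...
print(dock_fits(4, [12, 12, 10]))  # False
# ...but fit within a 40 Gbps generation-5 link.
print(dock_fits(5, [12, 12, 10]))  # True
```

The connectors on both ends can be identical USB-C; whether the setup actually works is a bandwidth question, not a plug question.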
Don’t even get me started on [Thunderbolt](https://en.wikipedia.org/wiki/Thunderbolt_(interface)), which is another protocol that works over USB-C physical connectors but requires different device support and cables!
Making this even more complicated is power delivery over USB-C. Thankfully, adding clarity here is part of the EU directive. USB-C is capable of delivering up to 240W of power. That should be enough to run a modern laptop, especially with efforts there to reduce power consumption.
The latest gaming laptop from Razer comes with a 230W power supply. That is pushing the limits of the USB-C spec, but it’s also the extreme.
The top end 16" MacBook Pro, for contrast, comes with a 160W adapter. There should be enough headroom in the USB-C spec for a while. And there might be the opportunity to increase the current limit without changing the physical connector.
The EU’s move is going to have worldwide implications. No company wants to make multiple versions of its products with slightly different connectors. The EU market has enough weight to push this change globally…and that’s a good thing.
USB-C physical connectors will simplify a lot of things for users. There will be some confusion over the speed and power capacity of the protocol used by various devices, but that’s solvable with simple labelling and clear requirements.
While Apple was resistant to this move, they have been moving to USB-C connectors for more and more of their iPad line-up. They need the added speed and power for accessories. The delay helped them wring out more value from their investment and to start scaling up the supply chain required to deliver millions of units with USB-C.
Most people won’t really notice as this directive comes into effect. It’s a smart move whose impact will be realized when you look back in five years and say, “Oh, I remember that connector…haven’t used it in years.”
I spoke with Hallie Cotnam on CBC Ottawa Morning on 26-Aug-2022 about this issue.
The promise of the internet is that it can connect us all. It’s also a threat.
There are wonderful use cases of technology. Especially over the past few years with various lockdowns and travel restrictions. Being able to work remotely, connected with friends and family, and to stay entertained and informed has been invaluable.
The negative stories range from frustration to outright horror. Recently, Canadian journalist Raisa Patel had an experience that borders on outright horror.
Raisa shares the start of her story in this Twitter thread.
🧵Long thread, but a summary of the past day: Yesterday, @angelynsayshi, @_jennamoon & I were targeted in the latest violent, misogynistic, racist campaign against Canadian journalists. We have decided to post it without censoring anything. TW for violence, racism, sexual assault pic.twitter.com/Do8ndP526v— Raisa Patel (@R_SPatel) August 11, 2022
Making matters worse is that when Raisa turned to the police for help, bureaucracy and poor experiences kicked in…
My police reporting update is that I spent a total of four hours on hold with the Ottawa Police Service today across two different attempts. On one call I was not able to speak to anyone after 1.5 hours on the line. Same result after spending 2.5 hours waiting on another attempt.— Raisa Patel (@R_SPatel) August 12, 2022
This experience is what prompted my conversation with Hallie. Is this common? Why are these hurdles in place? Is there actually any point in reporting these types of crime to police?
Why am I qualified to write about this issue? I did my graduate work in digital forensics and worked as an investigator for several years. I continue to research and apply digital forensics techniques in my work. While my focus isn’t online harassment, the technological aspects are the same.
The social impacts are quite different which is why I refer to other experts in that space down below.
To identify and then track down a suspect in a case like this, a technical trail must be followed.
Using Ms. Patel’s case as an example, we start with an email. From the screenshots posted, that email came from an email account with the provider Proton.
Email providers will have differing amounts of information on their account holders. Proton is a privacy-focused provider offering free and paid accounts. By design, they store a minimal amount of information about their users.
For a free account, only a username and password are required. Paid accounts require some level of billing information. There’s a near endless list of email providers out there. When a suspect is looking to abuse the system, Proton’s design doesn’t actually impact the case much. Other providers might have more information, but the odds are good that it’s false.
Why would a suspect put in their actual name when “Sam Samington” or another made up name would work just as well?
The victim already has the actual email, so what we’re looking for from the provider is an IP address or other information that might lead to the person behind the malicious account.
Assuming that information is available, the IP address leads us to the next step in the trail.
If you’ve been on YouTube in the last year, you’ve probably seen an ad for a virtual private network, or VPN. While positioned as privacy-protecting and an easy way to get access to geo-locked content, what a VPN really does is route all of your internet requests through the provider’s servers.
It’s a misdirect.
In a cybercrime situation, that means the IP address linked to the account at the email provider probably belongs to a VPN provider. That’s another round of requests and hopefully some account information for our investigation. It’ll depend on the provider and information associated with the account if anything useful comes from this step.
Odds are good—in a bad way—that there’s only another IP address at this step…that might lead to an internet service provider or ISP.
At this (hopefully) final step, the investigation might actually lead to account information accurate enough to link all of this activity to an actual individual.
…maybe.
An ISP or mobile provider account is actually just a link to who paid for the connection to the internet. The trail we’ve compiled so far has linked the offending email to a VPN, to an ISP, to an account.
This may or may not link to a specific device. For a mobile account, the technical information can more easily be linked to a specific device. It’s done that way for billing purposes.
For a home or residential internet connection, that will link to a router.
This is the device that connects everything in the residence to the internet. If there are several or several dozen devices in the house, which one sent the email?
This is when the technical investigation needs to move to the forensic examination of all of the potential devices. The hope here is finding evidence sufficient to link a person to the action.
Until that happens, you don’t know which person actually sent the email. It could be anyone with access to that connection…which could include more than just the people authorized to access that connection.
That lack of certainty makes charging someone and being successful in their prosecution very difficult.
We’ve followed a typical technical trail for cases like this.
It’s complicated, involves multiple systems, lots of opportunity for data to be lost, and requires a lot of technical expertise to properly follow this trail. An even higher level of expertise is required to follow the trail with enough rigour and process to testify with credibility in court.
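As a rough sketch, the trail might be modeled as a chain of hops where a single missing record ends the investigation. The hop names and records below are illustrative, not data from any real case.

```python
# Minimal model of the technical trail described above: each hop must
# hand the investigator a pointer to the next one, and one gap in the
# records ends the trail.
trail = [
    {"hop": "email provider", "record": "IP address of sender"},
    {"hop": "VPN provider", "record": "IP address of subscriber"},
    {"hop": "ISP", "record": "account holder and router"},
    {"hop": "router", "record": None},  # which device sent it? unknown
]

def follow(trail):
    """Walk the trail until a hop has no usable record."""
    for step in trail:
        if step["record"] is None:
            return f"trail ends at the {step['hop']}: forensic exam of devices needed"
        print(f"{step['hop']} -> {step['record']}")
    return "trail complete"

print(follow(trail))
```

Every hop is also a separate legal request, often in a separate jurisdiction, which is what makes the chain so fragile in practice.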
And that’s where the other set of challenges lies: the law.
The internet connects our planet. That’s often just accepted but really comes to light in situations like this. Imagine for a second that each of these steps lands the investigation in a different country.
For Raisa, this is what’s happening.
The email was sent through the Proton service. This service is run by a company in Switzerland and they follow Swiss law. That includes extremely strong privacy protections for individuals and, depending on the situation, the European General Data Protection Regulation (GDPR) could also apply.
Here’s a great call out of the Swiss data protections by data hosting provider Artmotion. That’s easier to dig through than the actual legal statutes.
In order to get whatever information Proton has about the suspect, Canadian law enforcement must make a request in Switzerland under Swiss law. IANAL, but the long and short of it is that this is another layer of paperwork and another set of standards to apply.
Generally, at each step in the technical trail the following questions must be answered;
In our scenario so far, there are at least two (Canada and Switzerland) and most likely five jurisdictions involved. How did we get to five?
Numbers 3—5 are assumptions, but given the pattern in the cases I’ve seen over the years they are most likely true.
That’s a lot.
The technical issues and bureaucracy are not insurmountable. They require patience and commitment to navigate, but it can be done.
A bigger, over-arching issue is the lack of available resources. This is technical law enforcement work. That’s a skillset in high demand both from a recruiting and deployment perspective.
Every country has this challenge. Here is a sample of reporting from Canada, the USA, and Ireland. Many more are just an online search away. No one has enough resources to address crimes like this.
There is a finite set of resources that can be deployed to help solve crimes with an online component. We need to have a discussion in our communities as to what those resources should be prioritized for.
Right now, we’re not.
Occasionally we talk about harassment and cyberbullying. Child endangerment comes up often, but usually as a club to push through laws that weaken encryption and reduce everyone’s digital security, not to actually build long term protections.
Cybercrimes with financial impacts are in the news frequently, but only a few times a year do we see any successful legal actions being taken. It’s not from a lack of desire or bad intentions. It’s a systemic issue that’s only being addressed in fragments.
Are victims like Raisa Patel, Angelyn Francis, Jenna Moon, and countless others left by the wayside?
No. There are steps you can take to help reduce your online risk and the potential impacts from issues like this.
These steps are better laid out by experts in this space like Leigh Honeywell. Leigh is the CEO of Tall Poppy. Tall Poppy offers a set of tools from an amazing team that help “stop online harassment from escalating to hacking.” Organizations can—and should—add this to their benefits package to help their staff be safe and secure online.
No, I don’t work for Tall Poppy, but I’m a huge fan of their work. This is a critical issue and they’re making it easier for people to get help while building a sustainable business. That’s to be admired.
If you are responding to harassment like this and in immediate crisis, I’ll refer to Tall Poppy’s advice;

From Tall Poppy’s site, available at https://www.tallpoppy.com/resources
If you’re not in immediate crisis, you should always take screenshots and document everything that’s happening. This record can be invaluable during any of the possible processes that follow.
There are several in-depth guides available freely online that list resources, apps, and allies available to help you. Here are the ones I’d recommend;
Each of these will help you understand your options at each step of the way. If you’re not dealing with harassment currently, these guides offer a lot of preventative steps you can take to help reduce your risk.
Take the time to read through these guides and implement their advice.
If you’re interested in learning more about these issues and the challenges in addressing them, these links will help;
Researching for this segment was frustrating. Both because of the rate at which online harassment happens and because it’s been hard to find reputable resources specifically for harassment. Especially on the law enforcement side.
There’s a lot out there aimed at cyberbullying education and prevention but not much for adults or about more serious crimes like harassment, hate speech, and direct threats made online.
This issue needs to be discussed out in the open. It impacts our communities and our friends. We need solutions that span technology, law enforcement, social media, and many more areas.
We need solutions that can have a real impact. We’ll only get there if we acknowledge the problem and move forward together.
Ping me if you spot a problem or if something is missing @marknca.
As always, the event site is the ultimate source of truth (though I try to provide more context here). The site is available at https://reinvent.awsevents.com
Thank you to A Cloud Guru for cross-publishing this guide again this year. The continued support is always appreciated!
My goal with this guide is simple: help you get the most out of AWS re:Invent.
It started with just a few tips in the early days. As the show grew, and grew, and grew, so did the guide. An entire week in Las Vegas is tricky any time. At a conference with hundreds of sessions and with a mountain of new things to learn? It’s a real challenge.
I’ve been to a lot of conferences over the years and to every re:Invent (with the slight exception-ish of attending only virtually in 2021). The guide is my way of giving back to the community and helping myself to keep it all straight!
This is probably the longest running guide to AWS re:Invent. I’ve been writing this guide since 2015. The eighth version follows the layout of the last couple of years closely, as a lot of people have found it useful.
For nostalgia’s sake, here are the previous guides:
This is the most important part of the whole guide. Honestly. The code of conduct sets the baseline for community expectations. AWS re:Invent is all about learning and having a shared experience. The key to that is making sure that environment is inviting, safe, and supportive.
The AWS Community Codes of Conduct applies to both the physical and digital space surrounding AWS re:Invent…and to all other AWS events.
I would say that it’s all common sense and following the golden rule but with tens of thousands of people from around the world sharing the experience for the week, it’s best to call this out explicitly.
READ THE CODE OF CONDUCT AND LIVE BY IT.
If you encounter an issue or problem that runs counter to the code or if you in any way feel unsafe, please reach out for help. Amazon Security can be reached at [email protected] for live events. In addition, any one of the on-site staff or security can help resolve any issues.
No system is perfect but this is a great start at creating a safe, inclusive community at this fantastic event.
I’m an AWS Community Hero, a long time AWS re:Invent attendee and a big fan of the show.
I teach a few courses over on the A Cloud Guru/Pluralsight platform (AWS Business Essentials, Introduction to the AWS Marketplace, and Mastering the AWS Well-Architected Framework), tweet regularly, and post a lot here on my site.
Any gathering in 2022 needs health measures in place to be successful. AWS has posted their requirements on the event site. As with everything, these measures may change as we get closer to the show.
The key aspects are:
Science and recent history show that masking up is a very good idea in shared spaces. Yes, even if you are fully vaccinated. This requirement may change leading up to the event.
As of Monday, 08-Sep-2022, AWS has removed the vaccination requirement for this event. This is in line with updated guidance from the US Centers for Disease Control and Prevention (CDC).
If masking and other health measures are optional, please respect everyone’s individual choice…even if you don’t agree with it.
Remember the code of conduct? Respect is a massive part of it. Response to COVID-19 is a sensitive subject for some, please be aware of that.
Registration for AWS re:Invent 2022 is open now.
Full badges go for $1,799 USD. Virtual access (live streams for the keynotes and the leadership sessions) is free.
This show is big enough that AWS actually calls the conference space a “campus.” Honestly, I don’t blame them. Here’s a look at the campus 👇
Current AWS re:Invent 2022 campus (via AWS)
When it comes to hotels, the key is to understand what you want from an experience point of view. If you want to be in the thick of the action, the yellow/amber properties are where you want to stay. These are locations where AWS is hosting the event itself.
The light blue properties are where AWS has negotiated room blocks and discounts. They also provide transportation between these properties and the locations where the event is happening.
If you’ve never been to Vegas, there isn’t really a “quiet” place listed here…with the possible exception of the Signature at MGM. This location is behind the MGM Grand and is a bit quieter than the rest as it’s only a hotel. No casino or theatre.
To learn more about each property, here are the direct links. Remember to book through the AWS re:Invent registration process (or afterwards through the event site) in order to get the discounted rates.
Venues included in campus but not hosting events;
Mandalay Bay and the MGM Grand are the outliers when it comes to hosting content. Most of the action is down in the Venetian, Palazzo, Wynn, Encore, and Caesars Forum area. If you’re looking to reduce your walking times (if only slightly), that may be the best choice.
No matter where you choose to lay your head at the end of the day, you’re going to be moving between venues…a lot. That means there’s no wrong choice when it comes to hotel.
Making sure that you’ve got a room at a rate you can manage is the most important thing at this point.
Not too much to say about attending virtually. This year’s show is back to an in-person focus which means that the bare minimum is available during the week remotely.
Given the ongoing and ever changing situation COVID-wise, if you aren’t comfortable or cannot attend in person, there isn’t really an option.
You can watch the keynotes and leadership sessions on the stream and follow along on social. That’s about it.
AWS knows re:Invent is a lot to wrap your head around. The past few years they have brought us the show, “How to re:Invent.”
This show helps break down specific aspects of the event. It’s definitely worth watching.
All six episodes are up for viewing now on the event site.
Reserved seating opened on 11-Oct-2022.
In-person attendees will want to take advantage of the reserved seating system in place for this year’s event. It opened Tuesday, 11-Oct-2022 and allows you to lock-in a seat in your preferred sessions…assuming you can get a spot.
Fingers crossed 🤞 that the reservation system works well this year. It’s always been a pain to deal with. You might not think it’s an issue but the most popular sessions can fill up very quickly when reserved seating opens up and any issues with the system can reduce your overall experience.
The good news? Any issues that pop up are quickly fixed and after the initial rush, it’s really a non-issue.
Session times are mainly scheduled now, but more will be added as we get closer to the show. Remember, even with reserved seating, AWS usually holds back some seating for walk ups on the day.
Also, all breakout sessions (not chalk talks, workshops, or other session types) are recorded and posted to YouTube shortly after re:Invent.
Keep that in mind as you build out your week.
It doesn’t suck now. That’s about all you need to know. 🤣
For years the mobile app was super frustrating. Now it’s improved to the point where it’s a handy companion for your week. Find the latest sessions, maps, timing of transportation, ways to connect with the community, and more.
Make sure you’ve downloaded it (iOS / Android) and set it up before heading to the event.
The official agenda is up on the event site. It’s a handy way to see the high level flow of what happens throughout the week.
Sunday kicks things off. It’s a great day to get oriented and get your badge and swag before the lines get busy. The Help Desks and AWS Training and Certification Information Booth are both open at noon. The only official event is the Midnight Madness. That runs 10:00pm to midnight.
Monday is jam packed with content. Everything is open logistics-wise. Bootcamps and breakout sessions start in the morning. Of note, the Welcome Reception runs from 4:00pm to 7:00pm in The Venetian.
Tuesday, Wednesday, and Thursday are “standard” days, if that’s even a thing at this show. Content all day and something unique each evening. Tuesday and Wednesday have community events after the content. Thursday brings back the legendary re:Play party.
Friday is a half day. Content runs up until 12:30pm which is the official close of the show.
Phew, there is a lot going on!
The last few re:Invents have expanded to five keynotes. Yup, five. That’s crazy.
The first keynote is usually on the Monday and comes as part of “partner day.” This year, Ruba Borno (VP, Worldwide Channels and Alliances, AWS) will deliver this one. This keynote is focused on the AWS Partner Network and various programs and efforts focused on AWS partners. If you’re not a partner, you can safely skip this one. (The 2021 partner keynote.)
Adam Selipsky (CEO, AWS) will deliver the “main” keynote this year. This will give us a broad business update, some key launches, and important customer use cases. (Here’s Adam’s keynote from 2021.)
Swami Sivasubramanian (VP, Amazon Machine Learning) will give the ML and database keynote. This one has gotten steadily more interesting each year. There’s always some cool use cases and new functionality announced. (The ML/database keynote from 2021.)
Monday Night Live became Tuesday Night Live and finally “Peter’s Keynote.” The eponymous Peter DeSantis (SVP, AWS Utility Computing and Apps) is back with this deep dive into how AWS works. Don’t expect any new features but each time Peter takes the stage we learn more about how AWS achieves the scale they do. There’s always an insiders look at some of the technology innovations that AWS has created and how they impact AWS as an environment. (Peter delivering the 2021 keynote.)
Finally, Dr. Werner Vogels (CTO, Amazon) will deliver a keynote focused on builders. Werner always has a strong call to action to help everyone build better in the cloud. This keynote usually features more technical launches and new services. (The 2021 keynote from Werner.)
The keynotes outline the structure for the week’s content. I strongly recommend watching at least Adam’s, Werner’s, and Peter’s.
The leadership sessions are essentially mini-keynotes. They are delivered by a senior AWS leader and focus on a specific area of the business. The sessions this year at re:Invent are:
This is a very similar set of sessions to last year. Some of the leaders have changed and some of the areas have either merged or grown enough to be called out on their own. You can view last year’s leadership sessions on the AWS YouTube channel.
Leadership sessions generally follow the same format: a quick review of the biggest announcements from the past year for that area of focus, then new announcements or launches, wrapping with some sort of call to action.
Announcements in the leadership sessions are ones that aren’t quite big enough to make the main keynotes. But for those with an interest in the topic, they can be just as important.
Now an annual tradition, a collection of AWS Heroes have created guides to help you navigate the sessions at AWS re:Invent 2022. Each guide follows a theme and helps you find a great set of sessions to dive deeper on that theme.
Attending AWS #reInvent? Don't miss the #AWSCommunity session track, led by AWS Heroes!
⭐️ Favorite in the re:Invent session guide now, & grab a seat when reserved seating launches October 11, 10AM PDT: https://t.co/ovcDIf0ek4 pic.twitter.com/iIul1KR3k5— Ross Barich (@rossbarich) October 6, 2022
This year, the available guides are:
There are also some industry and partner guides available by other authors (AWS folks, APN partners, etc.)
Outside of the official guides 👆, a number of people inside of AWS and out have compiled some great guides to sessions at AWS re:Invent. Here are a few I’ve found useful:
Bootcamps are half or full day sessions that provide a deep dive, hands-on experience.
There are three main types of bootcamps; exam prep, technical, and partner. Each of them provides a different experience.
These camps are typically offered for an additional fee. That can be a very smart investment, especially if you’re making that last push for an exam or need more guidance on a specific technical area.
New-ish this year is “gamified learning”. This is a collection of activities that have been popular throughout the year at re:Invent. This includes several AWS GameDays and Jams.
This year, the options are:
These are all hands-on challenges where you work in teams to solve problems together. Each problem is scored and there’s a nice friendly competition vibe to the whole event. It’s a great way to learn and apply some critical skills.
There’s just enough pressure to pump teams up and push you further in order to beat out the competition!
Every year AWS releases a tidal wave of new functionality leading up to and during AWS re:Invent. It’s a foundational piece of their culture. What does that mean for you?
For the three weeks leading up to the show and during the week, you’re going to get overwhelmed with the announcements. Seriously, it’s a crazy amount of effort to keep up.
Don’t believe me? Here’s a chart of the average number of AWS announcements per week over the years they’ve held AWS re:Invent.
In the AWS re:Invent “window” (the 3 weeks before and the week of), there is a 245% increase in the volume of announcements. That means there’s an average of 56 announcements each week during the window.
No wonder I’m always joking with Jeff Barr about his keyboard usage!
how many keyboards worn through? 😉🤣— Mark Nunnikhoven (@marknca) November 28, 2021
You are not going to be able to keep up with all of these announcements. That’s ok.
My suggestion? In your notes, have a dedicated space for “look up later.” Anytime you hear about a new announcement, cool session, or see a great write-up/discussion, add the URL or reference to your note.
After the show, set aside a little time each day to review one or two announcements until you’ve caught up.
AWS is introducing a new program this year called PeerTalk. The call to action is, “Expand your mind and your network.”
From what I can tell, this program is going to leverage your attendee profile to help you better connect with people in the community. Once you’ve identified someone or a couple people you’d like to connect with, you can chat online or request an in-person meeting.
If you immediately got a little nervous, the next part should help put you at ease, “…request in-person meetings of 2-4 people in designated public spaces located across the campus.”
This program was just announced and more details will be coming soon from AWS.
AWS re:Invent is when sponsors—APN partners—put their best foot forward. Partners will pop up all over the event. They sponsor lounges, pub crawls, the mobile app, and more.
Where you will see most of the sponsors is in the Expo. In addition to the sponsor booths, there is also a massive AWS presence in the hall. It’s so big, they actually call it the “AWS Village” now.
This presence includes the AWS Jam Lounge (an area for short hands-on experiences), the Community Developer Lounge, the AWS DeepRacer League arena, Builder’s Fair, and more.
It’s well worth checking out these areas and speaking with some of the sponsors while you’re at it. The welcome reception on Monday, 28-Nov (4:00—7:00pm pacific) is held in the Expo hall and a great opportunity to get some snacks while touring the area.
In addition to the Welcome Reception, one of the best ways to see the Expo is the un-AWS-official Nature Walk hosted by Corey Quinn as part of his re:Quinnevent 2022. That’s going down on Wednesday, 30-Nov (1:00pm pacific).
If you’re curious as to who will be in the Expo, you can view the complete list of sponsors on the event site.
AWS re:Invent is a hard show to attend. There is too much content, spread across too many properties, and too much going on in the evenings. And that’s just the primary AWS approach. The sponsoring APN companies have a parade of events that they host as well.
You are not going to be able to take it all in. You’re not even going to get to 20% of it all. It’s just not possible.
Here are some concrete tips to manage each day to get the most out of the conference and everything that surrounds it.
Every year there are a bunch of spaces throughout the campus where you can recharge and grab a snack, see a demo of a cool project, chat with some folks around a whiteboard, and more.
This year the event team has finally listed most of these spaces in one place, the Public Space Activations page on the event site. Check ’em out!
re:Play is the legendary party that goes down Thursday, 01-Dec from 7:30pm—12:00am pacific.
It’s hard to describe this party. It’s really something you need to experience to understand.
AWS does a great job of making sure there’s something for everyone. There’s always a couple of amazing, world class music acts. Werner will typically announce the headliner at the end of his keynote that morning.
In addition to that, there’s a ton of food and treats. Lots of games to play. And places to step away from the sensory overload.
Even if you’re not one for parties, it’s worth going to re:Play just to check it out. You might be surprised about how much fun you have!
One thing AWS re:Invent nails year after year is a fun and inviting atmosphere. It manages to strike the balance between being jam packed with new things to learn and providing environments to connect with the community.
Hopefully this guide will help you get the most out of your week!
Please check back regularly for updates! Ping me, if you spot a problem or if something is missing @marknca.
I spoke with Hallie Cotnam on CBC Ottawa Morning on 07-Aug-2022 about this issue.
On Friday, 08-Jul-2022, the Rogers network suffered a massive outage. Rogers is a major ISP and cellular provider in Canada. Just how massive might surprise anyone not living here. They have 35% of the national market share for mobile connections and 30% of all Canadian home internet connections.
On top of that, they have 2.25 million retail internet customers and another 7,000 enterprise customers.
Over a third of the country is online because of Rogers. Over a third of the country went dark for the entire day.
Much has been made of the outage (just check the references section at the end of this post) but when you wade through all of the opinions, it appears that the issue was the result of one mistake.
It’s the type of mistake that keeps network engineers and operations teams up at night. A simple misconfiguration that threads the wrong needle and is extremely difficult to rollback.
Cloudflare has a great summary of the issue as seen from the internet.

Cloudflare BGP data showing Rogers network drop off the internet on the day of the outage, 08-Jul-2022
👆 that big cliff? That’s not good.
Most people will never see the inside of a data centre, including a lot of that network’s engineers. Most of the work is done remotely. That requires a secure access path into systems that can update the network resources in question.
Care to guess where simple mistakes escalate out of control?
If you said, “Remote access and update configurations?”, you win! …and by that, we all lost on July 8th.
Someone, somewhere made a simple mistake that apparently closed much needed update pathways and took most of the network offline.
How? These types of changes usually have both technical and process guardrails in place but they aren’t infallible. Mistakes still make it to production. It happens…thankfully rarely.
The good news? The root cause of the issue was probably located quickly.
The bad news? The issue had already taken enough of the network offline that bringing it back up presented its own, unique challenge.
While the network outage lasted almost 17 hours, all indications point to the original issue being resolved reasonably quickly, with the rest of the time spent unravelling the nightmare of legacy systems.
Rogers took a lot of heat for this outage. Their stock dropped 1.17% on the day. But while it’s easy to blame them, the reason the outage lasted so long was written in a thirty-year buildup of technical debt, business incentives, and the geographical challenges of the Canadian market.
Everyone impacted has called for change. The Government called Rogers, Bell, and others to the carpet to figure out how to prevent another outage this significant. Those efforts won’t drive any significant changes.
Canada is just too big and our population is too small to have a diverse set of telecommunications providers. That’s ok. We have reasonable—if expensive—coverage today. We need significantly better coverage in the Territories and some rural areas but most Canadians have access to reasonably fast internet.
Do we need change in this sector? Yes.
Lower costs would help. So would regulation that prevents bundling of multiple services (discounts for more services from one provider), which forces Canadians to put all of our eggs in one basket. So would subsidized access for rural and northern areas.
But at the end of the day, this massive outage was from a mistake. A mistake that happened despite technical and process safeguards. Why? Because 💩 happens. 🤷
A Twitter thread from me on the day with my initial reactions;
as the @rogers outage rolls towards hour 18, the msg on their website keeps getting more empathetic
"catastrophic" isn't an exaggeration here
nationwide networks are complex. lots of opportunities for cascade faults that require rebuilds#nointernet #rogersoutage 🧵 pic.twitter.com/biCsycpr1E— Mark Nunnikhoven (@marknca) July 9, 2022
There’s a million different ways to build in the cloud. Ok, not a million, but there’s a lot. For the past few years, the serverless movement has been growing but what, exactly, is serverless?
Turns out, we can’t agree on what it is. Or at least we used to agree, but don’t now?
Corey Quinn wrote a fantastic piece calling out Amazon Aurora Serverless v2 for not actually being serverless. He points out that even doing nothing, the service will run you about $43.20 USD each month.
That violates a key serverless tenet: don’t pay for idle resources.
Corey’s right.
Vlad Ionescu replied with an epic Twitter thread that basically sums up as: labels don’t matter, the solution does.
Rant: can we please stop pretending that serverless in 2022 is the same thing as serverless in 2016?
We're intellectually masturbating about what "serverless" means and we're confusing people, all while we're like on serverless v3!
Of course v3 products don't fit in v1 boxes!— Vlad Ionescu (he/him) (@iamvlaaaaaaad) June 21, 2022
Vlad’s right.
The community has tried to define serverless for years. The closest I could come up with was this dictionary-esque entry from 2018 👇;

As the community has grown, the technologies matured, and the use cases expanded, the definition of serverless has gone from hazy to absent.
The original vision for serverless was systems that are event-driven. As a request is processed, the system kicks into gear and then lies dormant—with no idle costs—waiting for the next event.
This grew out of AWS Lambda’s popularity as a compute platform. You pay for AWS Lambda based on the amount of compute you use for a specific amount of time measured in milliseconds.
If you’re not processing an event, you’re not paying. Linked with other services that match that pricing and operational model, it’s very possible to have a solution that’s directly aligned with the business value it delivers.
That’s a good thing.
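To make that pricing model concrete, here’s a rough sketch in Python. The rates are illustrative examples only, not current AWS pricing (which varies by region and changes over time):

```python
# Illustrative rates only -- not current AWS pricing, which varies by
# region and changes over time. Lambda bills per request plus per
# GB-second of compute, with duration measured in milliseconds.
PRICE_PER_GB_SECOND = 0.0000166667     # USD, example rate
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD, example rate

def monthly_cost(invocations, avg_ms, memory_mb):
    """Estimate a month of Lambda spend for one function."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 1M invocations a month at 120 ms and 512 MB of memory:
print(f"${monthly_cost(1_000_000, 120, 512):.2f}")
# No events processed? No cost -- the alignment described above:
print(f"${monthly_cost(0, 120, 512):.2f}")  # $0.00
```

The key property is in the last line: when nothing runs, the bill is zero, which is exactly the alignment with business value that the original serverless vision promised.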
Noted cloud bard Forrest Brazeal sums it up nicely in this mini-review;
“Serverless” compute services ranked from most serverless to least serverless, in my opinion:
1. AWS Lambda
2. Google Cloud Run
3. AWS Fargate
4. AKS virtual nodes— Forrest Brazeal (@forrestbrazeal) June 23, 2022
But out of this push, teams finally realized that design choices directly impacted operational efforts. You can make smarter design choices and lower the effort required to keep the lights on.
This mirrored the growth of the observability movement (another term starting to dilute) as teams were trying to build more resilient systems and avoid being paged at 2 AM to fix outages.
What Corey has noticed and rightfully called out is that builders now need to see the label of serverless and then figure out just how serverless something is.
You can’t just assume a service fits your idea of serverless.
Frustrating? Yeah, a bit.
A good sign? 100%.
The spectrum of serverless offerings is a sign that more builders are seeking these more efficient solutions. The applicable use cases are expanding. People are building better.
…just make sure you aren’t going to get hit with an unexpectedly large bill (which is a rant for another day).
I spoke with Hallie Cotnam on CBC Ottawa Morning on 20-Jun-2022 about this issue.
Recently, an AI ethics researcher from Google was placed on administrative leave after publicly claiming that Google’s LaMDA system was sentient. A claim that has been denounced by Google and others in the AI community.
It’s a bold claim and there simply isn’t enough evidence to support it.
Is Google LaMDA sentient? No.
If it’s not an actual intelligence, what is it? LaMDA actually stands for Language Model for Dialogue Applications. This is a system that is designed to hold a conversation in a natural manner.
Sundar Pichai, CEO of Google and Alphabet, revealed the latest version at Google I/O 2022 and hit on three key aspects of the system. He phrased it as the system being able to;
These three areas of focus allow the system to present as if it’s having an intelligent conversation. In reality, it’s using all of its vast inputs—Google Search, YouTube, Google Maps, Google Books, etc.—to find groups of relevant responses and create something that is plausible.
If you’re asking yourself, “Why would Google create such a system?” The answer is actually very straightforward: efficiency.
Digital systems are often the first interface for many businesses (through online chat or phone calls) and a lot of tools like Google Home. We’ve all had that frustrating interactive voice response (IVR) experience when calling a big company’s customer support…
“Hello and welcome to BigCorp. What can I help you with today?”, 🤖
“Customer service”, 😀
“I heard, ‘Sales.’ Is that correct?”, 🤖
“No, I want customer service”, 😀
“Oh, I’m sorry that I misheard you. Forwarding you to ‘Sales’”, 🤖
👆 That’s the type of interaction—whether voice or chat—that LaMDA aims to get rid of forever. The results so far are promising.
There are definitely issues around the ethics of using a system like this. We won’t dive into them here but those discussions need to be had in our communities.
At a minimum, these systems would be forced to identify as digital. You should always know if you’re talking to a digital system.
But overall, LaMDA should be a big win for most use cases.
I spoke with Robyn Bresnahan on CBC Ottawa Morning on 07-Apr-2022 about the announcement and its implications.
now that everyone is asking…
yes, we’ve been working on an edit feature since last year!
no, we didn’t get the idea from a poll 😉
we're kicking off testing within @TwitterBlue Labs in the coming months to learn what works, what doesn’t, and what’s possible.— Twitter Comms (@TwitterComms) April 5, 2022
This statement from Twitter comes as no surprise. Shock is probably a better word. The “edit” button has long been requested with many rumours swirling about it.
While it feels like a simple change, behind the scenes there’s a ton of engineering work that needs to be done. Twitter was designed as a write once, read many system. Changing that fundamental truth won’t be easy.
It’s like building a one story house and then deciding you actually want a two story home. It’s possible, but takes a lot of time and work to do properly.
So Twitter can make this change but it’ll take time…and has taken a long time.
In addition to the engineering challenges, there are safety and misinformation concerns. Lots of them.
2/ Edit has been the most requested Twitter feature for many years. People want to be able to fix (sometimes embarrassing) mistakes, typos and hot takes in the moment. They currently work around this by deleting and tweeting again.— Jay Sullivan (@jaysullivan) April 5, 2022
This thread 👆 from Jay Sullivan dives into some of the issues.
Think of it this way. Without safeguards, if a country’s leader replies to a tweet with, “I agree with this” and then the original tweet is edited to something completely different, misinformation will spread. Yes, corrections can be made but corrections rarely get the same attention as the original.
This is why Twitter is working on and discussing what safeguards to add. A “blank cheque” edit just won’t work. It would upend what little credibility is left in the Twitterverse.
If you manage to push the hype to the side for a minute, you might be able to glimpse the start of a useful set of technologies.
The negative blowback hitting NFTs and this space right now (and rightfully so) is often due to a combination of misunderstanding the technology and misapplying it.
In the earliest days of the blockchain, the hype claimed it would solve all of our problems. On top of the multitude of failed projects, there were scams left, right, and center.
But as a technology, blockchains have their place.
The core value a blockchain provides is a distributed, publicly verifiable ledger of transactions.
In simpler terms, the technology makes it easier to make sure that no one entity controls the records of transactions and those transactions are much easier to verify.
These aspects are what make it appealing (but not perfect) for digital currencies. The technology is also an excellent solution to challenges where voting is required or in use cases like supply chain verification.
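As a toy illustration of that verifiability property—not any production blockchain, just a minimal hash chain in Python—each block commits to the one before it, so anyone holding a copy can detect tampering:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents; any tampering changes this value.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block records the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Anyone with a copy of the ledger can re-check every link.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True
chain[0]["transactions"][0]["amount"] = 500   # rewrite history
print(verify(chain))                          # False
```

Real blockchains add consensus, signatures, and incentives on top, but this is the core trick that makes the ledger publicly verifiable.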
Thinking about the blockchain led to the core ideas of Web3. This group of technologies isn’t fully formed yet but they are starting to come together in some experiments.
Web3 aims to deliver a distributed and robust system for building applications and distributing information. This system would also have a native ability for financial transactions.
A big push behind Web3 is to break up the titans of tech powering today’s internet. This movement takes aim at Facebook/Meta, Google, Apple, Amazon, and more.
The biggest challenge with the current direction is that it ignores many of the simple economic realities of the modern web. There is an unbelievable amount of resources that power the sites and services we use everyday.
In order to justify the use of those resources, some type of business model is required. I don’t agree with the methods or assumptions behind adtech but it certainly has powered some amazing innovation.
Web3 has a long way to go and it’s too early to know if it’ll pan out.
That brings the last of the big three of these technologies; NFTs.
Casey Newton has a great run down of two major communities—gaming and music—with very different reactions to the technology.
This demonstrates the conflicted nature of NFTs. We know that they are simply data files that point to something online. Anyone can verify who issued them and who currently owns them.
This is a useful thing to be able to prove but it requires other structures—like copyright, property laws, and licensing agreements—to actually work.
Right now, NFTs are being tried out in any number of scenarios. Some will work, others will fail. Some require the efficiencies of centralization which runs counter to the Web3 ethos.
The one undeniable truth of blockchain, Web3, and NFTs is that these technologies are still in their early days.
We need more experimentation. More thought. And more discussion around how they are best used.
NFTs continue to pop up everywhere. There’s a lot of hype around them and a lot of money is involved in the NFT and Web3 ecosystem.
Believe it or not, the hype part is a common pattern for technologies. Research & analysis company Gartner has a famous research methodology called “The Gartner Hype Cycle.”
Simply put, new technology comes on the scene, there’s a ton of hype, massive frustration, then gradually it settles into reasonable usage.
Not all technologies make it through the hype cycle but we learn something from every one that tries.
NFTs are nearing the top of their hype…hopefully.
An NFT is just a data file recorded on a blockchain. This data file follows a standard that lays out the minimum requirements.
That standard states that at its core, an NFT is just a name, description, and a link to something.
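As an illustration, here’s roughly what that looks like, loosely following the common ERC-721 metadata convention. The token name, description, and URL here are invented for the example:

```python
import json

# A hypothetical token following the common ERC-721 metadata shape:
# the on-chain record points at JSON like this, which in turn points
# at the actual asset. All the values below are made up.
metadata = {
    "name": "Sunset No. 7",
    "description": "A generative sunset, one of an edition of 100.",
    "image": "ipfs://QmExampleHashOnly/sunset7.png",
}
print(json.dumps(metadata, indent=2))
```

That’s it: a name, a description, and a link. Everything else people associate with NFTs lives outside this file.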
That doesn’t magically grant you a license or ownership over something. Other mechanisms or tools do that.
I am not a lawyer but it’s actually the terms and conditions of the NFT project that dictate what the NFT signifies. Reading through these contracts is critical.
Recently, a project called HitPiece claimed to be selling unique NFTs for music. The project promised a one-to-one relationship between an NFT and a hit song.
They promised that artists would earn royalties every time an NFT changed hands and that owners of the NFT would gain “access and experiences.”
Of course there was immediate outrage. This project had nothing to do with the artists whose work it was trying to sell. Within hours the site was taken down and now displays the message, “We Started The Conversation And We’re Listening.”
Now, the Recording Industry Association of America is taking legal measures against the project.
HitPiece is one of all-too-many scams associated with cryptocurrencies and NFTs. While we’ve seen lots of hyped-up technologies fail spectacularly, we’ve never seen this much money involved or this many people hurt by those failures.
That doesn’t rule out the future or potential utility of NFTs.
But as with any technology, we need to move through the hype cycle before we see that potential realized. If you view these early efforts as experiments, that might help you better evaluate the risks.
A recent study from Microsoft showed that only 22% of Azure Active Directory customers are using multi-factor authentication. That’s a much better result than Twitter, which after a major push had 2.3% of accounts enable this feature.
Authentication is the term the security community uses to verify who you are. To do that, we usually ask you to provide at least one of the following;
These are called factors.
Most sites—like Twitter—ask you for a username and a password. The password is a “something you know” factor.
Passwords are a frustrating solution to a really challenging problem. There are a lot of misconceptions about passwords out there that have built up from a lot of bad implementations.
The tl;dr on passwords is the longer the better. Think passphrase not password.
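A back-of-the-envelope way to see why length wins: estimate strength as length times the bits per symbol. This is a rough model that assumes randomly chosen characters or words, so treat the numbers as illustrative:

```python
import math

# Rough strength estimate: bits of entropy = length * log2(symbol-set
# size). Illustrative only; real strength also depends on how the
# password was chosen (random beats memorable patterns).
def entropy_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

# An 8-character random password over ~95 printable characters vs. a
# 4-word passphrase drawn randomly from a 7,776-word list:
print(round(entropy_bits(8, 95)))    # 53 bits
print(round(entropy_bits(4, 7776)))  # 52 bits -- and far easier to remember
```

Add a fifth or sixth word and the passphrase pulls well ahead, which is the whole point of “longer the better.”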
But, the reality is that passwords can and are regularly compromised. You need a better way to prove that you are, in fact, you.
Enter two factor (2FA) or multi-factor authentication (MFA). This is when instead of just asking for one thing you know, the system asks for more proof.
Ideally, these additional factors are in different categories.
Most multi-factor systems use either a smartphone app to generate a unique, temporary code or they text you one. This proves “something you have” …the phone you set up for your second factor.
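Under the hood, most authenticator apps implement TOTP (RFC 6238): your phone and the server share a secret and each derive the same short-lived code from it and the current time. A minimal sketch in Python, for illustration only:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # Count 30-second intervals since the Unix epoch.
    counter = int(time.time() if now is None else now) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the last nibble picks 4 bytes.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share the secret, so both derive the same code,
# and the code expires every 30 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Because the code changes every 30 seconds and never travels with your password, a stolen password alone isn’t enough to get in.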
Adding this second factor adds a little bit of friction to the act of logging in. However, it’s a major frustration for hackers.
It’s so frustrating, that Microsoft claims it’ll stop 99.9% of attacks on user accounts. I don’t know if that number will hold up but it certainly makes compromising your user account significantly harder with minimal effort on your part.
Knowing that, it’s time to take five minutes and set up multi-factor on all of your accounts, like Gmail, Facebook, LinkedIn, and Twitter.
Cybersecurity research has a long and interesting history. The dynamics between researchers, mischief makers, hackers, and companies is complex to say the least.
Understandably, companies don’t want to look bad. Most spend huge swathes of their budget on marketing and positioning themselves in a positive light.
Having security holes exposed to the public can have a negative impact on a company’s reputation and their bottom line.
Countering that, most companies also want to ensure that their technology works as intended and only as intended.
How can an organization balance these needs?
In most jurisdictions, the laws around computers and hacking are overly broad and out of date.
This can lead—and has led—to lots of legal trouble for well-intentioned researchers.
These laws have also led to convictions for cybercriminals who have taken advantage of thousands.
The bug bounty system emerged to provide guardrails for the interaction between security researchers and the company hosting the bug bounty program.
The program lays out the types of research that are acceptable and how that research can be conducted. Furthermore, it creates a structure that usually defines compensation for the researcher and how and when—or even if—the issue can be discussed publicly.
For researchers, a public record of accomplishments is a critical aspect of career progression.
While not perfect, bug bounty programs are much better than the unstructured approach of the past.
The key to a successful bug bounty is clear communication. The boundaries and process need to be established before any research is done.
Why? Because looking at just the technological steps taken, there isn’t much of a line between security research and hacking. Yet the difference between the two is night and day.
Security research is done with understanding and consent. It’s a process where the goal is security improvements and gained knowledge.
Bug bounties can help protect companies and researchers alike. They establish a working relationship and set the proper expectations on both sides.
They aren’t perfect. But they are a strong move in the right direction.
Another hack. Another setback for Web3. Another code quality issue costs millions.
Wormhole is a service that helps you move value from one blockchain to another. So if you want to take 1 SOL from the Solana blockchain to the Ethereum blockchain, Wormhole will help make that happen.
The way it does this is through smart contracts.
A smart contract on one blockchain makes sure you have the funds and then holds them. On the destination blockchain, another smart contract buys the necessary funds and transfers them to the purchaser.
Conceptually, this is very straightforward.
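As a toy model—plain Python, not real smart-contract code—the lock-and-mint flow looks something like this. The class names and receipt format are invented for illustration:

```python
# Toy lock-and-mint bridge sketch. A real bridge uses smart contracts
# and cryptographically signed messages; here a plain dict stands in
# for the transfer proof.

class Chain:
    def __init__(self):
        self.balances = {}
        self.locked = 0

def lock(source: Chain, account: str, amount: int) -> dict:
    """Hold funds on the source chain and emit a transfer receipt."""
    if source.balances.get(account, 0) < amount:
        raise ValueError("insufficient funds")
    source.balances[account] -= amount
    source.locked += amount
    # In a real bridge this receipt is a signed, verifiable message.
    return {"account": account, "amount": amount, "valid": True}

def mint(dest: Chain, receipt: dict) -> None:
    """Create matching funds on the destination chain."""
    # The critical step: reject forged receipts. Failing to verify
    # this proof properly is the class of bug that drains bridges.
    if not receipt.get("valid"):
        raise ValueError("invalid transfer proof")
    dest.balances[receipt["account"]] = (
        dest.balances.get(receipt["account"], 0) + receipt["amount"]
    )

solana, ethereum = Chain(), Chain()
solana.balances["alice"] = 10
receipt = lock(solana, "alice", 1)
mint(ethereum, receipt)
print(solana.balances["alice"], ethereum.balances["alice"])  # 9 1
```

The whole scheme hinges on that verification step in `mint`: if a forged receipt slips through, the destination chain mints value backed by nothing.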
Smart contracts are code. Code almost always has bugs.
If a malicious actor finds those bugs before the developer does, bad things usually happen.
That’s exactly what happened to Qubit Finance recently. And that’s exactly what just happened to Wormhole.
A malicious actor found the bug in the smart contract system Wormhole uses and leveraged it to steal cryptocurrency on three different blockchains worth an estimated total of $326 million.
What happened to Qubit Finance and Wormhole was unfortunate and it was wrong. In both cases, it appears that the actions were malicious and ill-intentioned.
Both of these cases highlight the need for extremely detailed and rigorous testing of the code that runs smart contracts.
Developers working in Web3 and specifically in the DeFi (Decentralized Finance) areas need to understand the risks that errors in the code pose.
It’s one thing if a video game you write has a bug in it. The consequences are usually a bit of player frustration that goes away when you fix the bug.
In the Web3 world, a bug can cost you millions.
3TB of data involving airport staff across Colombia and Peru was exposed to anyone on the internet. The report called out an “unsecured AWS server” as the source of the breach.
Digging a little deeper, we find that the data was stored in Amazon S3, not an unsecured AWS server.
Why does it matter?
Running a server takes a lot of work. There are updates to run regularly, resources to monitor, configurations to tweak, and threats to monitor for.
A server has a high operational cost.
They should be avoided whenever possible. Technologies like serverless, containers, or managed services are almost always a better choice for your solutions.
Amazon S3 is a managed service. This reduces the operational cost of managing your data and should make having a strong security posture easier.
In Amazon S3, you create a bucket and you put objects inside that bucket. Think folders and files.
The shared responsibility model that runs the entire cloud dictates that you are responsible for the data you store in S3 and the configuration of S3 as a service.
This is where the challenge lies.
By default, an S3 bucket is only accessible to the user or system that created it. It’s locked down and you have to explicitly choose to provide access.
This access is typically provided through a service called IAM or Identity and Access Management. This is an immensely powerful service that allows you to create all sorts of different access scenarios.
Unfortunately, it can be tricky to wrap your head around this system and how to configure it to match your intentions.
Mistakes here are what leads to these types of data breaches.
If you’re wondering why it’s always Amazon S3 and not another CSP’s offering, it’s simply a matter of scale. S3 is one of the most popular cloud services out there.
When using a system like IAM, you want to apply the principle of least privilege. This concept means that you only grant the permissions required to the people or systems required to complete specific tasks.
If you only need to write a file, you only get write access to that specific file.
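To make that concrete, here is a sketch of what a least-privilege policy for that “write one file” scenario might look like, expressed as a Python dict in AWS's JSON policy format. The bucket name, account ID, and role name are placeholders, not real resources:

```python
import json

# A least-privilege S3 bucket policy, sketched as a Python dict.
# The bucket, account ID, and role name below are made-up placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWriteOfOneObjectOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/uploader"},
            # Only PutObject: no read, list, or delete permissions.
            "Action": "s3:PutObject",
            # Only this one key in this one bucket.
            "Resource": "arn:aws:s3:::example-bucket/reports/daily.csv",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Compare that single action on a single object with a wildcard `"Action": "s3:*"` on `"Resource": "*"`, which is the “anyone can do anything” end of the spectrum.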
Unfortunately, this principle seems to be in direct opposition to the idea of “get it working ASAP.” 🤦
This leads to a lot of permissions being set to “anyone can do anything.”
Sadly, given the flexibility of the system, even when you try to set least privilege permissions, mistakes still happen.
This is when you should use a tool called Access Analyzer for S3. This free tool helps ensure that the permissions you’ve set match the permissions you want.
That’s a critical step to protecting your data and staying out of the headlines.
The deeper I dive into the world of Web3, NFTs, and the blockchain, the more I’m fascinated. This is a compelling confluence of technology, communities, grifts, and possibilities.
Color.museum came across my Twitter feed recently. After reading and re-reading their proposal, I’m still not sure if this is a serious endeavour or an elaborate joke.
The project’s tagline is, “Own the building blocks of new art/the future/the metaverse.”
If you participate, you select your desired colour, give it a name, and then a short description. This is then minted as an NFT on the Ethereum blockchain.
The claim is that you will “own your color for eternity.”
That’s a bold claim. One for which there’s absolutely no evidence to support.
Based on the information provided by the project, what you actually get is an NFT with the following unique information. There is other data in the NFT, but it isn’t related to its uniqueness.
{
"name": "The Name You Provided",
"hex": "#000000", // the colour value you chose
"description": "Description you provided",
"imageSocial": "https://LINKED-TO-AN-IMAGE.png",
"dateMinted": "13 December 2021, 13:54 GMT",
"image": "https://LINKED-TO-AN-IMAGE.png"
}
This data is stored on the blockchain under a token ID that is the decimal value of the “hex” field…assuming that’s available.
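Assuming the token ID really is just the decimal value of the hex colour code, the conversion is a one-liner. The colour names here are illustrative, not taken from the project:

```python
def color_token_id(hex_color: str) -> int:
    """Convert a hex colour code like "#1E90FF" to its decimal value,
    which (per the project's description) would serve as the token ID."""
    return int(hex_color.lstrip("#"), 16)

print(color_token_id("#1E90FF"))  # 2003199
print(color_token_id("#000000"))  # 0
```

Since there are only 16,777,216 possible six-digit hex colours, the scheme also caps the supply, hence the “assuming that’s available” caveat.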
If you follow the URL in the “image” fields, you arrive at a nice colour tile that shows the token ID, hexadecimal value, name of the colour, the Color.Museum logo, and a big splash of the colour.
That’s it.
The structure above 👆 isn’t that different from any other NFT. You’re buying some type of license for the artwork or data stored at the URL. Some NFTs also offer benefits that apply while you own the NFT, things like membership to a club.
With Color.Museum, the implication is that you would own this colour. That’s an amazing deal…if it was possible. You can’t own a colour.
This project is positioning itself as “a Meta NFT.” They state that this is “a building block to spawn new NFTs, or derivative NFTs.”
The implication is that if someone else wants to mint an NFT using this colour, it would reference your Color.Museum NFT and there would be some sort of remuneration.
I’ve read through all of the materials available. There is no method by which to enforce this structure. Nor does this project have any legal position to own or license colours.
The Web3/NFT space is truly the wild west and this is one wild idea. At the end of the day, it looks like—yet again—all you’re buying is a license to an image.
…and these ones aren’t even that creative.
The concept of a smart contract is exciting. It’s a program that runs on the blockchain executing the terms of an agreement.
Unlike physical contracts, the code in a smart contract is much clearer. In a physical contract, language can have unintended consequences. This is the reason why there are so many lawyers involved in our systems.
They are present to provide the guardrails and guidance to ensure that the language in a contract reflects the intention of the parties involved.
One of the goals of a smart contract is clarity.
The code in the contract evaluates the various conditions to validate that they are true before executing the contract.
If the agreement states that 10 Ethereum (ETH) are required in order to transfer ownership of an NFT, the contract validates that 10 ETH are in fact involved in the transaction.
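The core logic of that validation step can be sketched in a few lines. Real smart contracts run on-chain in languages like Solidity; this Python sketch only illustrates the check-then-execute pattern, and the names are made up:

```python
PRICE_ETH = 10  # the agreed price from the contract terms

def execute_transfer(payment_eth: float, buyer: str) -> str:
    """Illustrative sketch of a smart contract's core pattern:
    validate the agreed condition, then (and only then) execute.
    Not real contract code -- actual contracts run on-chain."""
    if payment_eth != PRICE_ETH:
        raise ValueError("condition not met: payment does not match the price")
    # Condition holds, so ownership transfers to the buyer.
    return buyer

new_owner = execute_transfer(10, buyer="bob")
print(new_owner)  # bob
```

Calling `execute_transfer(5, buyer="bob")` would raise instead of transferring, which is the whole point: the code, not a lawyer, enforces the terms.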
This greatly simplifies these agreements. It doesn’t make them bulletproof though.
Code almost always has bugs. And in a smart contract, that can mean the difference between a successful agreement and one party losing everything.
A user on reddit recently posted a story where they lost half a million dollars of ETH through a smart contract.
This quote is telling, “Sent ETH to WETH contract and got WETH back (after some googling I found this is how the contract works).”
Herein lies one of the major problems. The everyday user will not or cannot read the code in the smart contract.
Just like legalese in physical contracts, the code obscures what will actually happen from most parties.
Even if you can read the code, that doesn’t mean there aren’t bugs in it.
When building software, we constantly test. These tests check to see what happens when the code gets unexpected inputs. When it’s run multiple times. When it’s called out of order. And all sorts of other conditions.
The goal is to exercise the code in all different ways to find out how it reacts. Software is complicated. Testing helps us better understand the code we’ve written.
Testing of smart contracts is still in its infancy. Just like the coding language that runs these contracts. This is an area that needs a lot of work…and fast before more people lose more assets.
An interesting aspect of Web3 is the smart contract. These are simple programs that run on the blockchain when certain conditions are met.
For decades, we’ve been creating more and more software and one thing has always held true: there will be bugs.
A bug in the code is simply a mistake. Given the complexity of the environment that mistake could be minor or it could be catastrophic.
Security issues are a very special class of bug. These bugs allow a malicious actor to manipulate the code in order to produce a result that the actor desires…not what the programmer intended.
Recently, Qubit Finance, a DeFi/Web3 platform, was attacked. Their incident report details the bug that the attacker took advantage of.
This bug allowed them to abscond with almost $80 million in cryptocurrency.
Coverage of the attack uses the term hacker to describe the malicious actor. Unlike some other cases, this use of the term is accurate. Though cybercriminal is even more specific.
Why would this attacker be a hacker when another isn’t? Intention.
The compromise of the OpenSea marketplace used the system in the way it was intended to be used. The actor bought an NFT from a valid listing on the platform through a valid transaction.
It went against the intention of the seller but the issue was with the platform, not the use of it.
In the Qubit Finance case, the technical specifics are very different. Qubit’s platform is designed to bridge one cryptocurrency to another.
Take X of Ethereum and convert it to Y Binance Coin.
The attacker used a bug in the Qubit smart contract to take 0 Ethereum to convert to almost $80 million in Binance Coin.
While a conversion was done, the intention was to convert a non-zero amount of one cryptocoin at fair market rates to another. That didn’t happen here.
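The general lesson is that bridge code must validate the deposit before minting anything on the other side. This Python sketch only illustrates that idea; the actual Qubit bug lived in a Solidity contract and was more subtle, and the rate here is an invented example value:

```python
def bridge_deposit(amount_eth: float, rate: float) -> float:
    """Illustrative sketch of the validation a cross-chain bridge needs.
    The real Qubit flaw let a deposit function be triggered without
    funds actually moving; this only shows the missing guard in spirit."""
    if amount_eth <= 0:
        raise ValueError("refusing to bridge a non-positive deposit")
    # Convert at the (assumed) market rate and mint on the other chain.
    return amount_eth * rate

print(bridge_deposit(2.0, rate=8.5))  # 17.0
```

With that guard in place, a call like `bridge_deposit(0, rate=8.5)` fails instead of minting something for nothing.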
As smart contracts start to proliferate, code quality is paramount. The argument has been made here that the Qubit case is a cybercrime.
A strong case also exists that this smart contract executed as designed.
After all, isn’t the blockchain supposed to be the single source of truth? And that contract executed as written.
This issue highlights just how much work needs to be done in the Web3/DeFi world before it’s ready for mainstream adoption.
The word “hacker” gets thrown around a lot. Anytime there’s a cybersecurity issue, the malicious actor is labelled a hacker.
Most of the time, it’s an accurate label.
However, the line can be very blurry very quickly.
Hacker originally didn’t refer to a cybercriminal.
The word was used as a label for someone who loves to break down computing problems and systems and solve them in creative ways.
This definition is still listed in the dictionary but has fallen out of use. We—the original hacking culture—lost control of it over time as it was used again and again in the mainstream to refer to malicious activities, not acts of creation.
As with science, economics, or any complex technical topics, simplifications must be made in order to make any issues accessible to a broad audience.
In the case of cybersecurity, that simplification is “Anything unintended == hacker.”
Honestly, it’s not a bad trade off. But it’s also not always accurate.
NFT marketplace OpenSea recently had an issue where some NFTs were sold at a dramatically reduced price compared to their current trading value.
Due to the nature of the blockchain, these transactions were considered valid.
Of course, it’s a hack right? Some hacker broke into OpenSea and committed a crime (probably fraud or maybe theft?) in order to pull this scheme off.
Not so fast.
It turns out—in this case—that there were multiple listings for the same NFTs. Earlier listings that the owners thought were removed were only removed from the web interface.
These listings were still accessible via the API or application programming interface. The unexpected transactions were conducted via the API using the lower price listing.
OpenSea treats the API as the single source of the truth. Users (buyers and sellers) see the web interface as the single source of truth.
The hacker in this case took advantage of that and completed a completely valid transaction on the platform.
So are they a hacker or a smart buyer?
Apple’s FaceID technology has made it a lot easier for you to get into your iPhone or iPad. Swipe up while looking at the device.
It doesn’t get much easier than that.
It comes as no surprise (hopefully) that FaceID needs to see your face in order to work. If you’re wearing a scarf or mask, it won’t work.
This has been a challenge for those of us who have deep winters since the start. But with the onset of the pandemic, everyone else started to be impacted.
To be clear, that impact isn’t very significant.
Instead of looking at the device and swiping up, you now have to swipe up and type in your passcode.
Most people don’t use a passphrase which means they are typing in six digits. More work than a look, but hardly a show stopper.
The most frustrating part of the process wasn’t so much typing in your passcode but the delay as your iPhone tried to figure out if you are actually you by scanning your face.
A few months back in iOS 13.5, Apple added mask detection to FaceID. When the system determined that you were wearing a mask, it would prompt you for your passcode.
This saved only a second but that made a world of difference.
Now, Apple is in the late stages of testing an update to FaceID. This will allow you to log in while wearing a mask.
This opt-in feature will reduce the accuracy of FaceID but shouldn’t impact your security in any meaningful way.
Right now, FaceID uses the TrueDepth camera to map your face using over 30,000 different data points.
That data is then mapped to a specific model that trades perfect accuracy to allow for a reasonable amount of variation. If you’ve had a rough night, didn’t shave, or just paid a visit to the dentist, FaceID still works.
Those trade-offs mean that there’s a one in a million chance of someone else being able to unlock your phone with their face.
Using FaceID with a mask combines the mask detection feature with a more generous tolerance for matching your face. This will increase those odds an unspecified amount.
However, combined with the limit of five FaceID attempts before prompting for a passcode, requiring a passcode after a reboot, and other protections, this feature should only improve the usability of your device without significantly increasing your risk.
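Some back-of-the-envelope math shows why that five-attempt limit matters. This uses Apple's published one-in-a-million baseline figure; the odds for the relaxed masked mode aren't public, so we can only model the baseline:

```python
# Apple's published FaceID false-match rate is roughly 1 in 1,000,000.
# The masked-mode figure isn't public, so we model only the baseline.
p_single = 1 / 1_000_000

# Probability that at least one of the five allowed attempts falsely
# matches before the device falls back to requiring the passcode.
p_five_attempts = 1 - (1 - p_single) ** 5

print(f"{p_five_attempts:.8f}")  # ~0.000005, i.e. about 5 in a million
```

Even across all five tries, the odds stay around five in a million, and after that the attacker is back to guessing a passcode.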
I try to solve too many problems and it’s making most of those problems worse.
Similar to the challenge I’ve had with “good enough”, as a technologist I suffer from NIH syndrome.
I’ve been learning to code for thirty-five years. In that time I’ve written code that has never seen the light of day and code that’s been used by millions of people.
To this day, I still see useful tools and think, “I can build something like that.”
The deviousness of that statement is it’s true. I could probably build something like whatever tool I’m looking at.
However it misses the key question, “Should I build something like that?”
Very often, the answer is “No.”
I create a ton of content. To keep that pace, I use a lot of very handy tools. Most of these tools do one thing very well. Focus is a great attribute for a tool.
It also moves that tool close to the “I can build it” zone.
This isn’t just a “me” problem. This comes up constantly with teams building technologies. Teams always feel that urge to write their own solution instead of leveraging a great open source project or paying for a tool.
Teams that fail give in and spend time reinventing the wheel. And then maintaining wheel 2.0. Great teams can push through this urge and put an importance on their time.
The biggest frustration with this issue is that it often occurs in areas that aren’t tied to the core problem that I’m trying to solve.
It’s often the technological equivalent of trying to make your own butter when you’re trying to make chocolate chip cookies.
Could you? Sure. Should you? No.
Just use the off the shelf butter and focus on getting those delicious chocolate chip cookies in the oven and then into your belly!
When building technology and writing code, you need to constantly remind yourself to focus on what matters.
Focus on what gets you closer to your goals and not what’s simply possible.
…now I have to go make chocolate chip cookies 😉
I had a conversation with a friend today about accepting “good enough” as a result. For years, I’ve absolutely f**king hated “good enough” when it comes to my work.
…and that’s held me back in a number of ways.
I constantly fight the urge to continue to work on something. To put just a little bit more time into it. To make it just a little bit closer to perfect.
Of course, no matter how much time or effort I put into the work, it’s never perfect. Perfect is impossible.
Ironically, there is always a point where I’ve put too much work into something and it suffers because of it.
It’s taken years and years of mental effort to push past my aversion to “good enough.” It’s still there. I still cringe when I stop working on something and declare it “good enough.”
This was one of the motivations for my joining the effort in #Ship30for30: to push through “good enough.” Writing 30 atomic essays in 30 days requires me to just let go at “good enough.”
I don’t have the time to keep making those changes to the work. You can only iterate so much in the time allotted before you’ve got to move onto the next essay.
I’m still deeply uncomfortable shipping things when I still view them as incomplete. But I’m also often shooting for a 7 when a 3 would do.
And honestly, in the moment when I can objectively evaluate what I’ve done, most of the time good enough actually is.
It’s the first day of Data Privacy Week.
Those in the community will spend the week trying to raise awareness within the general public and advocating for various acts of privacy legislation or reform.
For most, though, it’ll pass without even a nod of the head.
That raises the critical question, “Is Digital Privacy Important?” If the general public isn’t calling their representatives or voting with their wallet to ensure their privacy rights, is that a key indicator?
Is the answer simply, “No.”?
I don’t believe so. I believe that most people care about their digital privacy. They want awareness and control of their data online.
The problem is that this is such a complex and nuanced issue that it’s extremely difficult to understand the scale at which you’ve lost control of your data.
One hair that is split all too often is the idea of “user data.” When the everyday person refers to user data, they mean data that they have and data about them.
When a company in the data market—collectors, brokers, etc.—talks about user data, they mean data that you have explicitly shared. The photos that you upload to your social network, your posts, the profile information you added. Things like that.
The data about you and your behaviours aren’t included in their definition of user data. They file that under “data about a user.”
This is a useful dodge when dealing with the public. Doubly so when dealing with legislators.
I wish I could make a call to action that would change the situation, but there isn’t any one thing. Time and time again, when people are aware of how data about them is created and handled, they want change.
The best thing that you can do is to inform yourself about how the services and apps you use handle and harvest your data.
Understanding that, you can make different choices for some services and ask the services you can’t change why they are collecting so much data about you.

The USA made headlines last week as a number of international airlines cancelled flights into the country. The President of Emirates airlines described it as, “…one of the most delinquent, utterly irresponsible issue…I’ve seen in my aviation career.”
If you’re interested, you can read more about it.
This issue reminded me of the challenge of breaking down a complex, technical topic for a broad audience. This is how I do it…
The first step is to research the topic to a reasonable depth. The goal here is to hit ~85–90% awareness of the topic. After that point, the knowledge you gain is too specific and not applicable to the core idea you are going to communicate.
While you’re researching, make sure to keep notes on the key points that keep recurring. This is going to help you in the next step.
Research done, I look through my notes. I’m trying to find the key points that are referenced often. What are the key points of this issue?
The trick here is to make sure that you boil it down to just the essentials.
It’s tempting to address the nuance. Don’t.
At least don’t the first time you explain the issue. You can always dive deeper later on.
Now it’s time to weave these key points into a narrative. When discussing technical topics, this storyline is going to be aligned to timing of an event and the layers required to understand the issue.
Make sure you are only communicating the key pieces that are required for the audience to understand the issue and no more.
Connect those pieces together in a way that tells the story of the issue. What happened is only part of it; the audience needs to care about the why too!
Yesterday’s atomic essay ended with the line “…um…”
That line probably broke a number of English grammar “rules” and left things on a bit of a cliffhanger.
Neither is a good thing for a risk analysis.
Why didn’t I draw a conclusion or provide the reader with a tip to help them figure out how likely an app is to have security problems?
Because it’s simply not possible.
And that’s a big problem.
For years we saw security claims like “military-grade encryption” or “security seal” from a recognizable name in the security space.
Those claims don’t really mean anything. There are some claims—certifications—that can provide insights into a company’s security if you understand what they are.
But you’ll need a deep understanding of the cybersecurity space to grasp their importance.
Verified compliance with frameworks like ISO27001, HITRUST, PCI-DSS, and others set a bar for security and require third party validation that a company meets that bar.
But if you’re not a cybersecurity practitioner, who’s heard of those?
Certainly not the everyday user.
If there isn’t a simple way to determine the likelihood of a security incident happening, how can you make an informed risk decision?
You can make a reasonable determination about the data you’re trusting a company with, after all you’ve provided most of it.
To complete the second half of the function, you could assume 100% chance of your data being exposed and then act accordingly.
For sensitive data, this is a reasonable way forward. For data that’s not sensitive, it’s not a crazy approach either.
The problem is that we should have better information. In this case, better guarantees that companies are required to take cybersecurity seriously.
That would start to provide us with the data we need to determine the probability of an issue instead of just guessing.
If we know how to evaluate the privacy impact of a mobile app, how do we determine the second part of the risk function, probability?
There’s no data about the probability of an incident listed in the App Store alongside the app. There’s no metric, information, or even a hint of how seriously this developer—or any other—takes security anywhere near the app listing.
Striking out on the App Store, where can you turn to?
You can search the app/service/developer name to see if there’s any mention or history of security or privacy issues in the past.
This only takes a minute.
Unfortunately, this doesn’t produce any level of assurance unless an issue was big enough or handled poorly enough to get noticed.
Maybe there’s some protection in the law? The US is a major exception here as they don’t have strong, federal privacy legislation (it’s state by state and hit or miss).
Other jurisdictions like Australia, Canada, Japan, and the EU all have overarching privacy legislation that mandates some level of security effort by a company that collects personal data.
The challenge here is that it’s almost always worded as “make a reasonable effort” and the penalties for failing aren’t significant.
The notable exception is the EU, where penalties for poor security are up to 2% of an organization’s global turnover (and another 4% if they mishandle a breach).
Ok, so what’s the trick then? How can someone who’s about to download something like a mobile app understand what measures are taken to protect their data?
…um…
Risk is a function of the potential impact of something happening and the likelihood that it will happen.
We also know that digital privacy is the assurance that the system you are using only uses the data you intend in the way that you consent.
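That definition of risk can be written as a tiny formula. The scoring scales below (impact on a 1 to 10 scale, likelihood as a probability between 0 and 1) are assumptions for illustration, not a standard:

```python
def risk(impact: float, likelihood: float) -> float:
    """Risk as a function of impact and likelihood.
    The scales are illustrative assumptions: impact scored 1-10,
    likelihood as a probability between 0 and 1."""
    return impact * likelihood

# A high-impact but unlikely event vs. a low-impact but likely one.
catastrophic_but_rare = risk(9, 0.1)
minor_but_common = risk(2, 0.9)
print(catastrophic_but_rare < minor_but_common)  # True
```

Counterintuitively, the minor-but-common event scores higher here, which is exactly why likelihood, the half we usually can't measure for an app, matters so much.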
Let’s apply these concepts by examining the privacy disclosures in an Apple App Store listing.
We’ll use the App of the Day for Apple’s iOS as our example.
Today, it’s Explain Everything Whiteboard. This app is an award winner and aims to help you “teach, present, sketchnote, record videos, and work together.”
A required part of the App Store listing is the “App Privacy” card. This is a critical tool that helps you evaluate the potential impact of a breach or issues with the app.
“Explain Everything Whiteboard” tracks some data that isn’t directly linked to you; contact info, usage data, identifiers, diagnostics.
Digging deeper we find out that they collect a user ID, device ID, email address, diagnostics data (like crash data), and product interactions.
This “App Privacy” card is really a cheat sheet. For specifics, we’ll have to read the developer’s privacy policy (yes, really). This is a legal document that explains what’s legally possible from the app.
Given that the app allows you to collaborate with others and share your whiteboards, it’s reasonable to expect the collection of information in order to enable that.
The device ID, user ID, and email address are the pieces of information that are needed to do that.
If this data was exposed would it impact you?
Probably not. Your email address is public. You enter it everywhere. The device ID and user ID are also reasonably public as any app on your device is potentially going to have access to those as well.
The other information collected by the app is data designed to help optimize the app and focus the developer’s efforts. There’s nothing really sensitive there.
The App Privacy card has given us the information we need to understand how this app uses our data. With that, we’ve made a reasonable evaluation of the impact this app could have on our privacy…which is negligible.
Rinse and repeat this process for any app you’re interested in using from the App Store.
Data Privacy Day is coming up on the 28th of January. Privacy and security are closely linked, and both are issues that I care deeply about.
And while I’ve been preparing content for the day, one question continues to haunt me.
Does anyone care about digital privacy?
The broader concept of privacy has many interpretations and is very different across cultures.
When it comes to digital or data privacy, we’re talking about how information about you is collected, processed, and used.
At the speed and scale that our technology works today, this is a major issue. Or at least it should be…right?
The rise of social media at the dawn of the millennium changed how we view information about ourselves. We started to share freely with the world things that we only used to tell our close friends.
Whether or not this has been a positive change is a debate for another time.
The fact remains that 2.9 billion people log into Facebook each month. The amount of content they are sharing is mind boggling.
Does the normalization of sharing like this mean that privacy isn’t a concern?
Facebook—and other social networks—allow the user to control who sees their posts but public is the default for most networks.
In order to preserve some sense of value in my privacy advocacy work over the years—and hopefully one that’s actually valid, if you could ever measure it—I have a theory.
The complexity of the technology involved prevents most people from understanding the issues around data privacy.
It’s not that people don’t want control over their data, it’s that they don’t realize the extent to which they’ve lost control already.
It’s not immediately obvious to most people the amount of data that your devices and the software running on them have broadcast publicly. All so you can (ironically?) read this essay on privacy.
Every year I hope that Data Privacy Day moves the needle towards a more secure technology future where people are aware of and have control of their data online.
I’m honestly hopeful that’ll happen.
But I’m pragmatic enough to realize that it’ll take a lot of small steps to get there.
I spoke with Hallie Cotnam on CBC Ottawa Morning on 19-Jan-2022 about this issue and how the US ended up in this situation.
5G is here. 5G is rolling out. Just wait for 5G.
What is going on?!?
For the last few years, the mobile industry has been abuzz with 5G this and 5G that…and for good reason. The fifth generation (5G) of mobile network technologies has a lot to be excited about.
The first thing you need to understand about 5G is that it’s not just one thing. As a term, 5G covers a range of technologies that are all aimed at taking better advantage of specific areas of radio frequency.
The early cellular networks were not very efficient. They had a limited capacity to connect customers. That was ok; not many of us had cell phones back in the 1970s and ’80s.
But now, everyone has a smartphone.
For most conversations, you can think of 5G as three distinct technologies; the low, mid, and high band.
Low band is slightly faster than what we have today in 4G/LTE networks. It’s got good range and reasonable speeds. This is great for general use cases.
Mid-band is the sweet spot for big cellular networks. It has reasonable range and great speeds. This will be the new normal in a few years.
High band doesn’t go very far but is crazy fast. This section of 5G is best used in high-density locations or small spaces. Think stadiums (lots of people) or hospitals (small spaces with high bandwidth needs for those surgery robots).
Part of the problem is that, depending on the cellular network or phone provider, you may have had a “5G” phone or connection in your hand for the past few years…even though you didn’t notice anything different.
You’re not imagining that. Way too many announcements have pumped up the 5G hype train.
Which brings us around to the current issue in the United States.
In 2019, AT&T and Verizon won the auction to license the mid-band (or C-band) of the 5G spectrum in the USA. The price tag? $67 billion.
December 2021 was when this part of their 5G networks were supposed to go live. That got delayed repeatedly, eventually ending up on a 19-Jan-2022 launch date.
Why?
Despite the three-year design and rollout post-auction, it took until just recently for the aviation industry to really grasp the importance of a potential issue. An issue the CEOs of United, American Airlines, Southwest, and Delta called “potentially catastrophic.”
It turns out that the frequency used for mid-band 5G (3.7—3.98 GHz) is “next” to the frequencies used by radio altimeters (4.2—4.44 GHz).
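Using the ranges quoted above, the separation between the two allocations is easy to compute:

```python
# Frequency allocations from the ranges quoted above, in MHz.
C_BAND_5G_TOP = 3980        # mid-band (C-band) 5G: 3.7-3.98 GHz
ALTIMETER_BOTTOM = 4200     # radio altimeters: 4.2-4.44 GHz

guard_band_mhz = ALTIMETER_BOTTOM - C_BAND_5G_TOP
print(f"Guard band between the allocations: {guard_band_mhz} MHz")  # 220 MHz
```

A 220 MHz guard band sounds generous, which is why the next question, why there's an issue at all, comes down to how well older altimeters filter out-of-band signals.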
A radio altimeter is a very important piece of equipment. It helps the airplane know what altitude it’s flying at…especially critical in adverse conditions.
The altimeter is critical to several systems onboard the aircraft, including the auto landing and other systems.
With the gap between the frequencies in use, why is there an issue?
Like most technology, radio altimeters aren’t perfect. Older models could be susceptible to interference from 5G signals given the strength of those signals at lower altitudes.
Is this a problem? No one is quite sure.
The FAA (Federal Aviation Administration) issued a regulatory assessment on 23-Dec-2021 advising everyone of the potential risk and calling for collaboration in order to research the issue.
This issue really boils down to two things. One it much easier to solve than the other.
The easier problem is mitigating the potential impact of the rollout.
This has been a known issue since well before the spectrum auction was held in 2019. The last-minute advisories, open letters, and hyperbole should not have been needed and have muddled a reasonably simple issue.
There have been multiple opportunities since the auction to address the issue on any number of fronts…that didn’t happen. The result is the battle in the media until a reasonable compromise was reached last minute.
AT&T and Verizon will go ahead with the rollout of this section of their 5G networks as planned but will not enable any towers near airports. Effectively creating little protective bubbles around the airports until this can all be sorted out.
Don’t worry! 4G/LTE will still be available in these 5G-less bubbles.
The harder problem is how to balance the pressure new technologies are putting on the spectrum and how they can co-exist with older technologies.
Newer technologies are digital and use protocols and filters that can ignore signals that aren’t intended for them. As long as something isn’t blasting so loud on frequencies they shouldn’t be that it drowns everything else out, the filtering built into new technology is sufficient to avoid problems like this.
That leaves the multitude of other technologies that are using spectrum with assumptions that no longer hold up.
This has been managed by licensing of the spectrum that included specific limits on broadcast power. That’s why you can listen to 91.5 MHz FM radio in one city and get a completely different station on 91.5 MHz in another city.
Their broadcast licenses limit the power of their signals and reduce the chances of interference.
That system works fine as long as things are stationary and everyone respects the rules. Newer technology is far more precise and spectrum is a finite resource.
We’ll see more demand for licenses that are closer and closer to each other. There won’t be as much space between uses. This issue will pop up again and again until we phase out older technologies.
But those technologies are working just fine. It’s hard to justify the replacement of a technology used in one industry for the benefit of a completely different industry.
As usual, the technology side is pretty straightforward (if still challenging), but it’s the economics and motivation that are the roadblock.
With alarming regularity, the information security and privacy world has the same debate with governments around the world.
Every few years there is a push to implement back doors in encryption technology. Regardless of the government, the straw man argument is the same: back doors in encryption are the only thing that can keep everyone safe.
End-to-end encryption (E2E) is a technique that ensures that only the parties involved in a communication can access its contents.
Information about the communication is still visible to the networks it travels across. Things like source, destination, size, etc. are all required to actually get the message from one place to another.
So it’s possible to know that you and I communicated on Tuesday from 09:21 to 09:33 and that connection involved the transfer of 486 MB of information.
What did that communication entail? Only you and I can tell someone that.
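That split between metadata and content can be sketched in code. This is a toy illustration only (the XOR keystream below is NOT a real cipher; real E2E systems use vetted protocols): a network observer can see how big the encrypted message is, but only the key-holders can read it.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable byte stream from the shared key.
    # Toy construction only -- NOT a real cipher.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XORing with the same keystream reverses the operation

shared_key = b"known only to the two parties"
message = b"meet at 09:21"
ciphertext = encrypt(shared_key, message)

# What the network can see: size (plus routing and timing), not content.
print(len(ciphertext))                  # 13 -- the length still leaks
print(decrypt(shared_key, ciphertext))  # b'meet at 09:21'
```

The ciphertext is unreadable in transit, yet its size and the fact that it was sent at all remain visible, which is exactly the metadata trade-off described above.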
The next question asked by most people is, “Why would you want to encrypt your communications?”
I think the opposite question is more enlightening, “Why would you want any number of network providers and service providers to be able to see your communications any time they want?”
If communication is not encrypted, it’s trivial for someone in support to look at that information. Same goes for someone running one of the network providers.
Will they do that? We don’t have the hard data to properly scope this risk. But we do know that breaking encryption makes that interception trivial.
The argument against encryption, or more specifically to include a backdoor in encryption schemes, is that governments and law enforcement should be able to monitor any communication.
In most countries, this is a capability already written into existing law for traditional technologies. However, those capabilities were granted well before the internet and haven’t been reexamined for today’s digital age.
To be clear, a backdoor that only the “good guys” can use is a fallacy. The experts agree.
A backdoor is an introduced weakness and any weakness can be exploited.
Open, transparent discussion is required to find a balanced path forward. Using the straw man of keeping you safe only sets up a conflict.
This discussion must seek to balance all concerns with data to support the highlighted risks.
If I asked you to cross the street without looking for traffic, instead just guessing whether there was an oncoming car, would you cross?
Of course not! That’s absurd. Why take such a huge risk without any data to make an informed decision?
Sadly, in the digital world, we’re crossing the street all of the time. Digital risk evaluation is often based on best guesses and not data.
Remember, risk is a combination of the impact of an event and the likelihood it will happen.
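That combination is often expressed as an expected-loss calculation. A minimal sketch, with purely illustrative figures:

```python
# Risk as expected loss: impact x likelihood. All numbers are hypothetical.
def risk_score(impact_dollars: float, likelihood_per_year: float) -> float:
    """Annualized expected loss for a single event."""
    return impact_dollars * likelihood_per_year

# Hypothetical: a breach costing $50,000 with an estimated 10% annual likelihood.
print(risk_score(50_000, 0.10))  # 5000.0

# A smaller impact with a much higher likelihood can carry the same risk.
print(risk_score(5_000, 1.00))   # 5000.0
```

The math is trivial; the hard part, as the rest of this section shows, is finding a likelihood number that is anything better than a guess.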
Say you’ve just downloaded a new mobile game and it wants you to create an account in order to play.
That process requires you to enter your full name, email address, physical address, gender, annual household income, and phone number.
When you’re deciding whether or not to give the game this information, what are you basing the decision on?
The look of the app? The fact that they haven’t been in the headlines? Their terms of service and privacy policy?
It usually boils down to reputation. Why?
To properly figure out the risk here, we need to understand what could happen.
This information is considered PII, or personally identifiable information. Some of it is available in other places (like the online shops you frequent), but it’s not something you want to give out readily.
What is the impact of this information being in the hands of the app developer? What if the information was made public?
The information could make you a target in the real world.
After all, the household income typically correlates to the amount of goods in the home. It also indicates the earning potential if someone stole your identity.
The impact could be massive.
The challenge is now finding data that could indicate how likely that impact is to occur.
You could search online about the company’s reputation. Have they had breaches in the past? Do they have a clear privacy policy and terms of service?
You could look for discussions about their reputation. Information about how they use the data they collect and how they make money will help here.
But at the end of the day, it’s not enough to make an informed decision. There’s no statistic that says, “This app has a 48% chance of exposing your personal information.”
That’s no way to make a risk decision, but we do it every day!
For the record, when you can’t gather enough data to get an idea of the likelihood, the potential impact should sway your decision…especially with PII.
Risk is a common topic when talking about cybersecurity and digital privacy.
At some level, we all understand the idea of risk but what are we actually talking about?
Risk is a way of describing exposure to danger.
To properly describe a risk, you need two pieces of information: the impact of an event and the likelihood it will happen.
You need both to make an informed decision. Unfortunately, in the digital world we rarely have both pieces of the risk puzzle.
Let’s examine the risk of someone gaining access to our crypto wallet.
If a cybercriminal gains access to our wallet they could take our cryptocurrency, transfer ownership of any NFTs, and impersonate us online if we’ve used that wallet as an identity.
That makes the impact directly related to what we’ve used that wallet for.
The value of cryptocurrency and NFTs is reasonably easy to evaluate. The impact of someone impersonating us is harder to quantify but is generally on the more serious side of things.
In the digital world, we usually have a good idea of the impact of an event.
Can we estimate the likelihood of this event occurring?
Assuming that you’re not broadcasting your wallet password, how likely is a cybercriminal or other malicious actor to try and access your wallet?
There’s no direct data to support any specific probability. That means we have to estimate…basically, guess.
You could take the more secure/paranoid route and assume it’ll happen. This means you’ll take steps to reduce the potential impact because you assume you’ll lose control of your wallet at some point.
You could take the exact opposite approach and assume it’ll never happen. This utopian view rarely ends well over time.
A third possible approach is to take reasonable steps that become more strict as the value of what your wallet can access increases over time.
This is a good approach but it’s not based on any real data.
The lack of probability data means that digital risks are evaluated using “best guess” numbers. There are no alternatives at the moment as the community scrambles to gather more accurate probability data.
But it’s a long way from the data-driven approach we need…
An NFT, or non-fungible token, is the representation of an asset in a smart contract. There are two immediate security challenges with NFTs that you should be aware of.
You need an application or service called a wallet in order to prove that you own this digital token.
This program helps you interact with the blockchain. It helps you prove which assets you own, conduct transactions, and more.
Remember, the wallet doesn’t store the actual NFT, that’s on the blockchain. It does store the bits required to prove that you own the NFT though.
That makes it a security risk.
If you lose access to your wallet, you could lose all of your digital assets with no way of getting them back. That’s the downside of decentralization.
This means you need to protect your wallet diligently and any system that can access it.
As a digital object, the NFT is really a collection of metadata about an asset.
It contains an “image” field that points to the asset itself. This presents a number of challenges.
As written, the standard doesn’t provide any method of validating what’s at the URI, just the ability to add a name and description.
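The metadata in question is roughly this shape (the token name and URL below are hypothetical, and this is a simplified sketch of the common ERC-721-style metadata record): nothing in it vouches for what actually lives behind the “image” link.

```python
import json

# Simplified sketch of an NFT's metadata record.
# The name and URL are hypothetical; nothing here validates what's behind
# the "image" URI, or guarantees the server hosting it stays online.
metadata = {
    "name": "Ape #1234",
    "description": "One token in a hypothetical collection",
    "image": "https://example.com/assets/1234.png",
}

blob = json.dumps(metadata)
# The chain can prove this blob is unchanged...
# ...it can't vouch for whoever controls example.com next year.
print(sorted(json.loads(blob).keys()))  # ['description', 'image', 'name']
```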
Who owns that domain? Who’s running that server?
How do you know it will be up and running in a week? In a month? How about a year or two down the line?
Because the NFT is on the blockchain, there’s no method of updating it. That lack of surety around the URI is a security challenge.
As part of a smart contract, NFTs have some flexibility. Their current form is a very basic implementation.
Over time, expect improvements to the format that will help address these concerns. We’re seeing the start of that work within smart contracts today.
But until it matures, make sure you understand where that asset actually lives and do your best to lock down your crypto wallet.
NFTs are all the rage right now. And why not?
It’s an interesting technology that has a lot of potential. Just like the early days of cryptocurrencies, there’s a bit of a gold rush happening.
Some NFT projects will succeed. A lot more will fail. Eventually things will stabilize and some community norms will emerge.
For now though, it’s important to understand what an NFT actually is. Like physically, well…digitally 😉.
Smart contracts are agreements expressed in code, instead of a mountain of legal paperwork.
This code lives on a blockchain so it’s transparent and verifiable. The blockchain will execute the contract automatically without a third party being involved when various requirements are met.
When I pay you X, I get NFT Y.
Of course, smart contracts can be much more complex but you get the idea.
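That “pay X, get NFT Y” rule can be sketched as a toy, off-chain function. Real smart contracts are code deployed on a blockchain (commonly Solidity on Ethereum), not Python, and the names and amounts here are hypothetical:

```python
# Toy, off-chain sketch of "when I pay you X, I get NFT Y".
# Real smart contracts run on a blockchain; names/amounts are hypothetical.
def settle(payment: int, price: int, seller: str, buyer: str) -> str:
    """Return the owner of the NFT after the contract executes."""
    if payment >= price:
        return buyer   # requirement met: ownership transfers automatically
    return seller      # requirement not met: nothing changes

print(settle(100, 100, "alice", "bob"))  # bob
print(settle(50, 100, "alice", "bob"))   # alice
```

The key property being sketched is automatic execution: once the condition is met, the transfer happens without a third party stepping in.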
An NFT is a unique record that represents an asset of some kind.
What it represents is entirely dependent on the smart contract.
Right now, we’re seeing NFTs used to represent digital artwork, membership in a group, and even as tickets to events.
When you dig into the code, an NFT is simply a name, description, and link to an image.
That’s it.
Remember, an NFT represents something. It isn’t that thing itself.
Take for example the Bored Ape Yacht Club. This funky collection provides the owner of the NFT with a digital asset (an artwork of a unique Bored Ape) and membership to the BAYC club.
Membership to the club is proven by showing you have possession of the NFT. That’s done through a standardized crypto wallet just like logging in with a username and password.
Proving you own the artwork can be done by matching the NFT to a license granted by the Bored Ape Yacht Club. Restricting usage of the artwork is a whole different matter.
If you want to make sure you are the only one using that Ape as an avatar or printing it out, you need the global copyright system and its enforcement mechanisms. That means a court of law.
NFTs may help prove ownership or licensing agreements but they don’t stop a major problem with digital assets: a complete disregard for copyright law.
It’s critical to understand any technology’s goals and its limitations. NFTs are incredibly useful representations of assets, but at the end of the day, they are just representations.
Web3, NFTs, and the blockchain are at the apex (hopefully?) of the hype cycle right now. These concepts are often talked about as a group, but they are in fact distinct, and it’s important to recognize that.
Let’s dive in and clarify what each one is…
Web3 describes a new iteration of the web that is decentralized and allows for a common economic system in order to recognize more granular value outside of traditional structures.
This is a shift away from Web 2.0 which saw the rise of centralized data collection for ad tech and data aggregation in social media networks.
The implementation specifics of Web3 are in flux and will be for a while yet.
There is no “cut over” date or launch day. This’ll be an organic process as Web3 services gain popularity.
A blockchain is a linked list of records. Each record can be used to validate the previous record. This means that records can’t be changed without changing every record after it.
This allows the blockchain to maintain its integrity while distributed instead of sitting with one organization or company.
Anyone can verify each entry, ensuring that it hasn’t changed. That makes blockchains useful in any number of situations where verification of something is required. Things like voting, financial transactions, proof of ownership, etc.
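The chaining idea is simple enough to sketch directly, using SHA-256 (the record contents below are hypothetical):

```python
import hashlib

def record_hash(prev_hash: str, data: str) -> str:
    # Each record's hash covers the previous record's hash, linking them.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain (record contents are hypothetical).
chain = []
prev = "0" * 64  # genesis value
for data in ["alice pays bob 5", "bob pays carol 2"]:
    prev = record_hash(prev, data)
    chain.append({"data": data, "hash": prev})

# Changing an early record produces a different hash, which would in turn
# change every hash after it -- so tampering is detectable by anyone.
tampered = record_hash("0" * 64, "alice pays bob 500")
print(tampered == chain[0]["hash"])  # False
```

Because each hash depends on the one before it, anyone holding a copy of the chain can recompute the hashes and confirm that no record has been altered.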
An NFT is a non-fungible token (horrible name). This is an implementation of a specific contract stored on a blockchain (so it can be verified). Most commonly, an NFT is an item within an Ethereum smart contract.
The idea is to have a technological method of proving ownership of something. An NFT is a verifiable certificate of authenticity or ownership.
NFTs have been used in a number of creative ways: as tickets to events, as proof of ownership for artworks, and as representations of ownership in a shared endeavour.
Web3, blockchain, and NFTs present interesting ideas to change certain aspects of our digital world. Like any technology or approach, there are strengths and weaknesses to each.
The challenge—as always—is recognizing those and making sure to use the best technology to solve the problem at hand.
We use technology in almost every aspect of our lives.
The gap between the expectations of that technology and the reality of its implementation is fascinating.
The internet is held together with bubble gum and duct tape.
But it works…and we continue to see fantastic new innovations and ways of connecting to each other come to light.
When we’re building technology, we need to focus on our goals. Security and privacy are critical parts of those efforts.
Over the past few years, we’ve seen a remarkable reduction in the effort required to create technology solutions. That’s fantastic!
But it’s critical that we remember these solutions have a lifecycle and we need to set our goals to account for that.
If setting a good goal is hard, achieving that goal is even more challenging.
Achieving your goal in tech is hard because those goals are often very ambitious. It’s easy to get swept up in the possibilities of a digital solution.
You start simple and things quickly spin out of control.
Before you know it, you’ve gone from the Flintstones to the Jetsons without shipping anything to your prospective users.
One way to avoid this spiral is to use the concept of simple feedback loops.
A feedback loop has three stages; learn, build, and measure.
To be effective, a loop should be short.
The shorter the time between trying out something new (build) and learning about it, the more impactful that learning is.
This is a core concept of modern solution development. When done well, feedback loops can help accelerate your work and make sure that you hit your goals.
It’s exciting building solutions with the latest frameworks and technology. Especially when there’s a lot of hype and excitement behind those projects.
However, there’s a lot to be said for using technology that’s tried and true.
Sure, it isn’t as cool as the newer stuff, but its issues are well documented, and knowing that, you can design around them when necessary.
Should you use the shiny new tech or stick to the tried and true?
As expected — hopefully — it’s never a simple this or that decision.
Before you can make a choice on what to build with, you have to figure out the goal of what you are building.
Most teams set their goal as solving problem X.
That’s a start but it’s not finished.
Solutions today are dynamic systems. They have a complete lifecycle. You don’t just write the code and then forget about it.
You need to design, write, and then deploy the system. Once it’s been deployed, you need to keep it running effectively until it’s time to retire the solution.
This broader view brings a new perspective to our goal.
We no longer need to just solve problem X. We need to do so in a sustainable manner.
Updated goal: Solve problem X in a sustainable manner throughout its lifecycle.
Operating the solution requires it to be maintainable.
How does that shiny new framework scale? How frequently is it patched? The answers to these questions will impact the choices you make throughout the system’s lifecycle.
Shiny doesn’t always mean better.
When we talk about privacy, what are we really talking about?
The formal definition of privacy is definitely outdated. Even the common understanding has failed to keep up with our digital world.
In the digital world, privacy as a human right manifests itself in very different ways. It’s not keeping to yourself. It’s about understanding your data and the data about you and how that data is used.
Focusing on the goal of digital privacy is critical to success. That goal is 👇
The assurance that the system you are using only uses the data you intend in the way that you consent.
The challenge (as always) is in the implementation.
Privacy is all about choices. Where it breaks down is the difference between an implicit and explicit choice. The complexity of the technology we use everyday amplifies this challenge, making it extremely difficult for most people to make explicit choices about their data.
Most users would expect that the web page they’ve asked for knows a few things about them. You could reasonably expect that the page knows what browser you’re using, whether or not you’re logged into the site, among other things.
If you’re a software engineer, you might recognize more nuance.
You know there are multiple systems that will see the request. That the browser is capable of a lot more than just showing a page. You understand that when you ask for one page, you’re probably getting data from at least ten different locations. With a deeper understanding of the technology at play, the engineer knows there is far more data and systems involved.
Given that gap, can a user meaningfully consent to data collection and sharing? How do we even start to address this challenge given how prevalent technology is in our communities?
If the goal of cybersecurity is to make sure that the system you are building works as intended and only as intended, what about stopping hackers?
What about preventing malware and ransomware? What about those ridiculous scenes we see on TV and in the movies? Isn’t that security’s job?!?
Calm down, it’s ok. That’s all in there still.
But if your primary goal is to stop something from happening, you’re creating more problems than you’re solving.
In the physical world of security, it’s easy to track and measure a stopping type goal.
Anyone can figure out how to track your performance if your goal is, “Make sure this painting doesn’t get stolen.”
As we cross over into the digital world, things get exponentially more complex. You can’t assume that you would know if something bad happened. Technology is just too complex and it changes too quickly.
The best way forward for cybersecurity is to assume a compromise at some point and try to figure out the best way to reduce any potential impacts.
Our updated goal encourages systems level thinking and requires teamwork.
The goal of cybersecurity is to make sure that the system you are building works as intended and only as intended.
If it sounds odd to think that security teams and people building technology should assume failure in their work, I would encourage you to try and view it as more pragmatic.
For really critical activities, it’s important to have a backup plan.
If you applied to University or College, odds are you applied to more than one school. You didn’t want to be left without an option.
More important than a backup plan, this goal forces teams to work together towards a single goal.
…and working together to build better systems has benefits well beyond just security.
Is this secure? Does this protect my privacy?
The first question I hear regularly from people building technologies. The second, from those who use that tech. These questions are two sides of the same issue.
Security and privacy are inextricably linked.
There are formal definitions for both but their goals can be stated simply.
Cybersecurity aims to make sure that the system you are building works as intended and only as intended.
Digital privacy is the assurance that the system you are using only uses the data you intend in the way that you consent.
How do you convey intentions in a digital world?
If you searched for sneakers, it would be reasonable to expect ads for sneakers among the other results. You’ve explicitly expressed, “I’m interested in sneakers.”
There’s a clear line between the action and result.
But why do those sneaker ads follow you around throughout the day?
Reading the news, watching sport highlights, shopping for groceries; sneaker ads.
When the teams built those sites or even the browser you use to view it, did they intend for you to be tracked in this way?
Originally, no.
The technology that enables this scenario combines existing standards and stretches features into new uses.
The browser you’re using right now has a relatively unique “fingerprint.” It’s a combination of the features available, your system, where you are, and other factors.
Each of these factors has a different and understandable use.
Yet they are also being used in an unintended way to track you. And there’s a lot more data than you intend being shared about you.
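A naive sketch of how that tracking works (every attribute value below is hypothetical): individually mundane attributes, hashed together, yield a near-unique identifier for one specific browser.

```python
import hashlib

# Individually mundane attributes (values here are hypothetical) combine
# into a near-unique identifier for one specific browser.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "timezone": "America/Halifax",
    "language": "en-CA",
    "installed_fonts": 217,
}

def fingerprint(attrs: dict) -> str:
    # Canonicalize the attributes, then hash them into a short identifier.
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(attributes))
# Change any one attribute and the fingerprint changes completely.
print(fingerprint({**attributes, "timezone": "Europe/Berlin"}))
```

No single attribute identifies you, but the combination often does, which is why each factor can have a perfectly legitimate purpose while the aggregate enables tracking.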
This common and widely accepted workflow violates the goals of both security and privacy.
Why? Because security and privacy were never first class considerations when these systems were built.
Security and privacy were bolted on afterwards and that continues to cause major issues for all of us daily.
In late 2017, IDT did an AWS “This is My Architecture” video. The video talks about how they managed secret information in their AWS environment.
Now, a few years later, I react to that video and see what’s stood the test of time, what could be done simpler given today’s technology, and generally critique the design against the AWS Well-Architected Framework.
The AWS Well-Architected Framework is designed to help you and your team make informed trade offs while building in the AWS Cloud. It’s built on six pillars;
These pillars cover the primary concerns of building and running any solution. And as much as we’d all love to have everything, that’s just not possible.
…enter the framework.
It’ll help you strike the right balance for your goals to make sure that your build is the best it can be now and moving forward.
I often get asked why I talk about building in the cloud and architectural choices so often…aren’t I a security person?
Yes, I do focus on security and architecture is a critical part of that.
There are really two types of security design work. The first is when you’re handed something and need to make sure the risks of that technology match the risk appetite of the users.
The second type is when you’re building the technology. This is where making choices informed by security early in the process can have profound effects. You’re no longer bolting security on but building it in by design.
That’s why I talk about architecture and building so much. It’s where we all can have the largest possible security impact!
This video—and the ones that will come after—looks at a specific set of design decisions and how they balance the concerns of the AWS Well-Architected Framework…where security is one of the six pillars.
At the time, IDT was using Hashicorp Consul to manage configurations in their environment. As a part of that, they had sensitive information (like API access credentials) stored with that information.
Consul allowed them to control distribution and access to that data in an effective manner. The challenge I have with the design (though understandable at the time) is the combination of sensitive and non-sensitive (or “not-as-sensitive”) information. That management approach generally leads to some challenges down the road.
A clear separation is usually a better approach. It allows for stricter permissions to be put in place for the sensitive information. This could strike a better balance between the application of least privilege and keeping operational overhead low.
Learn more in the reaction video 👆.
Btw, I’ve updated my course, “Mastering The AWS Well-Architected Framework” on A Cloud Guru. If you want a solid walk through of the ideas behind the framework and how to apply it to your work in the AWS Cloud, check it out!
In late 2017, iRobot did an AWS “This is My Architecture” video. The video talks about how they deployed their microservices in AWS.
Now, a few years later, I react to that video and see what’s stood the test of time, what could be done simpler given today’s technology, and generally critique the design against the AWS Well-Architected Framework.
The AWS Well-Architected Framework is designed to help you and your team make informed trade offs while building in the AWS Cloud. It’s built on six pillars;
These pillars cover the primary concerns of building and running any solution. And as much as we’d all love to have everything, that’s just not possible.
…enter the framework.
It’ll help you strike the right balance for your goals to make sure that your build is the best it can be now and moving forward.
I often get asked why I talk about building in the cloud and architectural choices so often…aren’t I a security person?
Yes, I do focus on security and architecture is a critical part of that.
There are really two types of security design work. The first is when you’re handed something and need to make sure the risks of that technology match the risk appetite of the users.
The second type is when you’re building the technology. This is where making choices informed by security early in the process can have profound effects. You’re no longer bolting security on but building it in by design.
That’s why I talk about architecture and building so much. It’s where we all can have the largest possible security impact!
This video—and the ones that will come after—looks at a specific set of design decisions and how they balance the concerns of the AWS Well-Architected Framework…where security is one of the six pillars.
At the time, iRobot was leveraging Amazon RDS (and still may be). They set out to build a way to deploy their various microservices in a consistent manner while presenting a unified API to their mobile client and other consumers.
This system needed to provide various build artifacts throughout the process while being entirely serverless at a time when that ecosystem of tools was just beginning to develop.
Learn more in the reaction video 👆.
Btw, I’ve updated my course, “Mastering The AWS Well-Architected Framework” on A Cloud Guru. If you want a solid walk through of the ideas behind the framework and how to apply it to your work in the AWS Cloud, check it out!
The AWS Well-Architected Framework’s Streaming Media Lens helps you understand the challenges of building a streaming video/audio solution in the AWS cloud.
This video 👆 walks you through the highlights of the Lens and explains the key areas of concern for these types of designs.
This lesson on the Streaming Media Lens is part of my course on the entire framework. That course is over 4 hours in length and helps you understand the value of the framework and how to use it to build a successful cloud team.
This course is available over on the A Cloud Guru platform.
AWS re:Invent is always THE event in cloud. Werner Vogels delivered the latest version of his hotly anticipated keynote on day four.
This 👇 is the Twitter thread of my coverage of the keynote…



@marknca tweeted at 02-Dec-2021, 16:31
Tweet 2/166 👇 Next tweet 👆 Start
Tweet 3/166 👇 Next tweet 👆 Start
Tweet 4/166 👇 Next tweet 👆 Start
@marknca tweeted at 02-Dec-2021, 16:33
Tweet 5/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:34
Tweet 6/166 👇 Next tweet 👆 Start
Tweet 7/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:35
Tweet 8/166 👇 Next tweet 👆 Start
Tweet 9/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:36
Tweet 10/166 👇 Next tweet 👆 Start
Tweet 11/166 👇 Next tweet 👆 Start
Tweet 12/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:39
Tweet 13/166 👇 Next tweet 👆 Start
@marknca tweeted at 02-Dec-2021, 16:39
Tweet 14/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:40
Tweet 15/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:40
Tweet 16/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:41
Tweet 17/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:41
Tweet 18/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:43
Tweet 19/166 👇 Next tweet 👆 Start


@marknca tweeted at 02-Dec-2021, 16:43
Tweet 20/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:45
Tweet 21/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:45
Tweet 22/166 👇 Next tweet 👆 Start
@marknca tweeted at 02-Dec-2021, 16:46
Tweet 23/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:47
Tweet 24/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:47
Tweet 25/166 👇 Next tweet 👆 Start
Tweet 26/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:48
Tweet 27/166 👇 Next tweet 👆 Start
@marknca tweeted at 02-Dec-2021, 16:49
Tweet 28/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:50
Tweet 29/166 👇 Next tweet 👆 Start
@marknca tweeted at 02-Dec-2021, 16:51
Tweet 30/166 👇 Next tweet 👆 Start

@marknca tweeted at 02-Dec-2021, 16:51
@marknca tweeted at 02-Dec-2021, 18:25
The leadership sessions at AWS re:Invent provide a deeper dive into a specific area of focus. Stephen Schmidt, CISO at AWS, takes the stage to talk all things security.
I missed the first 10 minutes of the session and will update this post when I watch it on demand.
The session is now available on demand, so I was able to watch the first ten minutes. Here are my takeaways as a list, instead of a tweet storm:
This 👇 is the Twitter thread of my coverage of the keynote…
@marknca tweeted at 02-Dec-2021, 21:15
@marknca tweeted at 02-Dec-2021, 21:51
AWS re:Invent is always THE event in cloud. Swami Sivasubramanian took the stage to deliver the machine learning focused keynote on day three.
This 👇 is the Twitter thread of my coverage of the keynote…

@marknca tweeted at 01-Dec-2021, 16:32
@marknca tweeted at 01-Dec-2021, 18:32
AWS re:Invent is always THE event in cloud. Peter DeSantis’ keynote provided a peek behind the curtain of the technology that drives AWS itself.
This 👇 is the Twitter thread of my coverage of the keynote…

@marknca tweeted at 01-Dec-2021, 23:01
@marknca tweeted at 02-Dec-2021, 00:23
Corey Quinn interviews key AWS leaders in a casual setting. This time he sits down with Bill Vass, VP Technology & Engineering.
This 👇 is the Twitter thread of my coverage of the interview…

@marknca tweeted at 01-Dec-2021, 21:03
@marknca tweeted at 01-Dec-2021, 21:27
AWS re:Invent is always THE event in cloud. Adam Selipsky gave his first keynote as the CEO of AWS. Here’s what he covered during this two-hour session.
This 👇 is the Twitter thread of my coverage of the keynote…

@marknca tweeted at 30-Nov-2021, 16:31
Tweet 2/111 👇 Next tweet 👆 Start
Tweet 3/111 👇 Next tweet 👆 Start




@marknca tweeted at 30-Nov-2021, 16:33
Tweet 4/111 👇 Next tweet 👆 Start
@marknca tweeted at 30-Nov-2021, 16:34
Tweet 5/111 👇 Next tweet 👆 Start

@marknca tweeted at 30-Nov-2021, 16:34
Tweet 6/111 👇 Next tweet 👆 Start
Tweet 7/111 👇 Next tweet 👆 Start

@marknca tweeted at 30-Nov-2021, 16:36
Tweet 8/111 👇 Next tweet 👆 Start
@marknca tweeted at 30-Nov-2021, 16:36
Tweet 9/111 👇 Next tweet 👆 Start


@marknca tweeted at 30-Nov-2021, 16:37
Tweet 10/111 👇 Next tweet 👆 Start

@marknca tweeted at 30-Nov-2021, 16:38
Tweet 11/111 👇 Next tweet 👆 Start
@marknca tweeted at 30-Nov-2021, 16:39
Tweet 12/111 👇 Next tweet 👆 Start
Tweet 13/111 👇 Next tweet 👆 Start
@marknca tweeted at 30-Nov-2021, 16:39
Tweet 14/111 👇 Next tweet 👆 Start



@marknca tweeted at 30-Nov-2021, 16:41
Tweet 15/111 👇 Next tweet 👆 Start


@marknca tweeted at 30-Nov-2021, 16:42
Tweet 16/111 👇 Next tweet 👆 Start

@marknca tweeted at 30-Nov-2021, 16:43
Tweet 17/111 👇 Next tweet 👆 Start


@marknca tweeted at 30-Nov-2021, 18:31
Amazon Inspector first launched in 2015. Now in 2021, it’s re-launching with a brand new architecture and a host of new features.
This is a vulnerability management service. What that means is that Inspector tries to find software vulnerabilities within your Amazon EC2 instances and container images stored in Amazon ECR (the Elastic Container Registry) and bring them to your attention.
In this version 2 edition of the service, the goal appears to have been "reduce friction." When enabled, Inspector will automatically detect new resources (instances and registries) and, where possible, start to scan them continuously.
Based on my initial experiences, it delivers. It’s truly a delightful experience.
During a scan, Inspector looks for operating system and application vulnerabilities and a small number of potential misconfigurations. If it detects an issue, it raises a finding.
A finding contains a host of details to help you figure out what you want to do with this particular vulnerability.
Sometimes, you’ll simply accept that it’s there and move on. Other times, you’ll patch the software, removing the issue. And sometimes, you’ll use another security control—like AWS WAF (a web application firewall)—to prevent anyone from exploiting the vulnerability.
There are only two issues with the new service.
The first is potentially huge for users of Amazon Inspector Classic. The new version of the service supports a wide variety of Linux distributions, but Windows support was (temporarily?) dropped.
The second issue is reasonably simple. In order to scan an EC2 instance, that instance must be able to access AWS Systems Manager. The easiest way to grant the required permissions is to attach the AmazonSSMManagedInstanceCore managed policy in AWS IAM to the instance's role.
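As a rough sketch of that prerequisite, here's what setting up an instance role with the managed policy could look like. The policy ARN is the real AmazonSSMManagedInstanceCore managed policy; the role name is a placeholder, and the boto3 calls are shown as comments since they require AWS credentials:

```python
import json

# Trust policy letting EC2 instances assume the role (standard AWS pattern).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# ARN of the AWS-managed policy the instance role needs for Systems Manager.
SSM_CORE_POLICY = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

def build_role_request(role_name: str) -> dict:
    """Parameters for iam.create_role; role_name is a placeholder you choose."""
    return {
        "RoleName": role_name,
        "AssumeRolePolicyDocument": json.dumps(trust_policy),
    }

# With boto3 (not run here), the calls would be roughly:
#   iam = boto3.client("iam")
#   iam.create_role(**build_role_request("inspector-scan-role"))
#   iam.attach_role_policy(RoleName="inspector-scan-role",
#                          PolicyArn=SSM_CORE_POLICY)
```

In practice you'd also add the role to an instance profile and attach that profile to the instance, but the policy attachment above is the piece Inspector cares about.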
Amazon Inspector is priced based on the number of container image scans run and the average number of EC2 instances scanned.
Like any AWS service, it’s important to understand the pricing structure and how your usage will map to it. This service can be pricey at scale, but remember to weigh that cost against the additional protection it offers your builds.
In the video above 👆, I walk through setting the service up and scanning some resources. That demo will give you an idea of how easy the service is to use (very) and the value it can provide you and your team (a lot).
The new Amazon Inspector is generally available today in most AWS regions with a 15 day free trial. Simply visit the Amazon Inspector Management Console to get started.
Here are the top AWS announcements leading up to and during AWS re:Invent 2021. I've rated these announcements, and for some, I've added a bit of commentary.
For the really big or impactful announcements, I'll link to more in-depth content where appropriate. I'm also tracking all of the announcements if you want the fire hose.
My take: This is a cool feature to help move your traditional databases into a purpose-built data store in the AWS Cloud. It's a smart move that helps teams get to a cloud native solution faster.
AWS Database Migration Service (AWS DMS) is a service that helps you migrate databases to AWS quickly and securely. AWS DMS Fleet Advisor is a new feature of AWS DMS that allows you to quickly build a database and analytics migration plan by automating the discovery and analysis of your fleet.
AWS DMS Fleet Advisor is intended for users looking to migrate a large number of database and analytic servers to AWS.
Amazon SageMaker Serverless Inference is a new inference option that enables you to easily deploy machine learning models for inference without having to configure or manage the underlying infrastructure. Simply select the serverless option when deploying your machine learning model, and Amazon SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests.
With SageMaker Serverless Inference, you pay only for the duration of running the inference code and the amount of data processed, not for idle time.
Amazon SageMaker Inference Recommender helps you choose the best available compute instance and configuration to deploy machine learning models for optimal inference performance and cost.
My take: This will help reduce the time to label data at scale and speed up machine learning projects.
Today, we are excited to announce the general availability of Amazon SageMaker Ground Truth Plus, a new turnkey data labeling service that enables you to create high-quality training datasets quickly and reduces costs by up to 40%.
We are excited to announce the preview of automatic chatbot designer in Amazon Lex, enabling developers to automatically design chatbots from conversation transcripts in hours rather than weeks. Amazon Lex helps you build, test, and deploy chatbots and virtual assistants on contact center services (such as Amazon Connect), websites, and messaging channels (such as Facebook Messenger).
The automatic chatbot designer enhances the usability of Amazon Lex by automating conversational design, minimizing developer effort and reducing the time it takes to design a chatbot.
Starting today, AWS Transit Gateway supports intra-region peering, giving you the ability to establish peering connections between multiple Transit Gateways in the same AWS Region. With this change, different units in your organization can deploy their own Transit Gateways and easily interconnect them, resulting in less administrative overhead and greater autonomy of operation.
AWS Shield Advanced now automatically protects web applications by blocking application layer (Layer 7) DDoS events with no manual intervention needed by you or the AWS Shield Response Team (SRT). When you protect your resources with AWS Shield Advanced and enable automatic application layer DDoS mitigation, Shield Advanced will identify patterns associated with layer 7 DDoS events and isolate this anomalous traffic by automatically creating AWS WAF rules in your web access control lists (ACLs).
These rules can be implemented in count mode to observe how they will impact resource traffic and then deployed in block mode.
These capabilities enable you to quickly respond to and mitigate DDoS events that threaten the availability of your applications.
My take: A nice user experience improvement to help get you off those proprietary databases.
AWS Database Migration Service (AWS DMS) is pleased to announce the launch of AWS DMS Studio, a new service console that makes it easy to manage database migrations from start to finish. AWS DMS Studio accelerates and simplifies migrations by integrating tools for each phase of the migration journey from assessment to conversion to migration.
AWS DMS Studio integrates AWS DMS Fleet Advisor to inventory and analyze your database and analytics fleet, AWS Schema Conversion Tool (SCT) to convert database schema and application code, and AWS DMS to migrate your data.
At each step of the migration, AWS DMS Studio assists you by providing contextual resources such as documentation and guidance on engaging migration experts where needed.
My take: A great move helping people get the education needed to start a career in machine learning.
The AWS Artificial Intelligence (AI) and Machine Learning (ML) Scholarship program, in collaboration with Intel and Udacity, provides students who self-identify as underserved and underrepresented in tech with educational content, career mentorship programs, and 2,500 scholarships annually as part of a commitment to a more diverse future AI & ML workforce.
Amazon Relational Database Service (Amazon RDS) Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is now available for the SQL Server database engine. Amazon RDS Custom for SQL Server automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install drivers, and enable native features to meet the dependent application's requirements.
My take: DevOps Guru continues to improve. This time extending into the database sphere.
Amazon DevOps Guru for RDS is a new Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes rather than days. Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for all Amazon RDS engines and dozens of other resource types.
DevOps Guru for RDS expands upon the existing capabilities of DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries.
When an issue occurs, DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue.
My take: This expansion of access analyzer will help improve your overall network security posture within your VPC.
Amazon VPC Network Access Analyzer is a new feature that enables you to identify unintended network access to your resources on AWS. Using Network Access Analyzer, you can verify whether network access for your Virtual Private Cloud (VPC) resources meets your security and compliance guidelines.
With Network Access Analyzer, you can assess and identify improvements to your cloud security posture.
Additionally, Network Access Analyzer makes it easier for you to demonstrate that your network meets certain regulatory requirements.
Amazon VPC IP Address Manager (IPAM) is a new feature that makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. With IPAM's automated workflows, network administrators can more efficiently manage IP addresses.
Amazon Textract, a machine learning service that makes it easy to extract text and data from any document or image, now offers specialized support to extract data from identity documents, such as U.S. Driver's Licenses and U.S. Passports. You can extract implied fields like name and address, as well as explicit fields like Date of Birth, Date of Issue, Date of Expiry, ID #, ID Type, and more in the form of key-value pairs.
Until today, OCR-based solutions were limited: rich background images prevented accurate extraction of all the required fields, names and addresses and the keys associated with them were hard to recognize (e.g., a Washington state ID lists the home address with the key "8"), and ID designs and formats that vary by country or state weren't supported.
My take: More functionality into an already jam packed tool for machine learning. Excellent.
Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). It provides a single, web-based visual interface where you can perform all ML development steps required to prepare data, as well as to build, train, and deploy models.
We recently introduced the ability to visually browse and connect to Amazon EMR clusters right from the SageMaker Studio notebook.
Starting today, you can now monitor and debug your Apache Spark jobs running on EMR right from SageMaker Studio notebooks with just a click.
Additionally, you can now discover, connect to, create, terminate and manage EMR clusters directly from SageMaker Studio.
The built-in integration with EMR therefore enables you to do interactive data preparation and machine learning at petabyte scale right within the single universal SageMaker Studio notebook.
My take: Sign up with your email and you can start learning machine learning tools and techniques right away. This greatly reduces the barriers to accessing this technology.
Introducing Amazon SageMaker Studio Lab, a free, no-configuration service that allows developers, academics, and data scientists to learn and experiment with machine learning.
Amazon SageMaker Pipelines, a fully managed service that enables you to create, automate, and manage end-to-end machine learning (ML) workflows, now supports integration with Amazon SageMaker Model Monitor and Amazon SageMaker Clarify. With these integrations, you can easily incorporate model quality and bias detection in your ML workflow. The increased automation can help reduce your operational burden in building and managing ML models.
Amazon SageMaker now offers enhancements to the machine learning (ML) lineage tracking capability that enables customers to track and query the lineage of artifacts such as data, features, and models across an ML workflow. Now, customers can retrieve the end-to-end lineage graph spanning the entire workflow from data preparation to model deployment through a single query.
This feature eliminates undifferentiated heavy lifting needed to retrieve lineage information one workflow step at a time and manually stitch them all together.
Customers can also retrieve lineage information for segments of the workflow by defining a step as the focal point and querying the lineage of the steps that are upstream or downstream of that focal point.
For instance, customers can define a model as the focal entity and retrieve the location of the raw data set from which features were extracted to train that model.
My take: This new tier makes it easier to keep data in DynamoDB for longer. No more need to shuffle it out to reduce costs while increasing operational overhead.
Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, which helps you reduce your DynamoDB costs by up to 60 percent for tables that store infrequently accessed data. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, old social media posts, e-commerce order history, and past gaming achievements.
My take: Smart business move by AWS. This will help larger organizations (like governments) move & modernize some critical workloads...finally.
AWS Mainframe Modernization is a unique platform for mainframe migration and modernization. It allows customers to migrate and modernize their on-premises mainframe workloads to a managed and highly available runtime environment on AWS. This service currently supports two main migration patterns – replatforming and automated refactoring – allowing customers to select their best-fit migration path and associated toolchains based on their migration assessment results.
My take: This opens up Amazon SageMaker to a lot more audiences. This could be THE launch of AWS re:Invent 2021.
Amazon SageMaker Canvas is a new capability of Amazon SageMaker that enables business analysts to create accurate machine learning (ML) models and generate predictions using a visual, point-and-click interface, no coding required.
My take: This is how I want to interact with Kafka. Honestly, should've been this way from the start. Super excited about this one.
Today we announced Amazon MSK Serverless in public preview, a new type of Amazon MSK cluster that makes it easier for developers to run Apache Kafka without having to manage its capacity. MSK Serverless automatically provisions and scales compute and storage resources and offers throughput-based pricing, so you can use Apache Kafka on demand and pay for the data you stream and retain.
My take: All the power of EMR without worrying about the infrastructure? Yes, please.
We are happy to announce the preview of Amazon EMR Serverless, a new serverless option in Amazon EMR that makes it easy and cost-effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. Amazon EMR is a cloud big data platform used by customers to run large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto.
With EMR Serverless, customers can run applications built using these frameworks with a few clicks, without having to configure, optimize, or secure clusters.
EMR Serverless automatically provisions and scales the compute and memory resources required by the application, and customers only pay for the resources they use.
My take: About time, this actually solves a common problem in service sprawl.
You can now use AWS Control Tower to deny services and operations in your Control Tower environments for the AWS Region(s) of your choice. Region deny capabilities complement existing AWS Control Tower Region selection and Region deselection features, providing you with the capabilities to address compliance and regulatory requirements while improving cost efficiency of expanding into additional Regions.
My take: Sounds super simple; it's probably a nightmare behind the scenes. Thankfully, builders don't have to worry about that and can just enjoy the added safety & security!
AWS Lake Formation is excited to announce the general availability of three new capabilities that simplify building, securing, and managing data lakes. First, Lake Formation Governed Tables, a new type of table on Amazon S3 that simplifies building resilient data pipelines with multi-table transaction support.
As data is added or changed, Lake Formation automatically manages conflicts and errors to ensure that all users see a consistent view of the data.
This eliminates the need for customers to create custom error handling code or batch their updates.
Second, Governed Tables monitor and automatically optimize how data is stored so query times are consistent and fast.
Third, in addition to table- and column-level permissions, Lake Formation now supports row- and cell-level permissions, making it easier to restrict access to sensitive information by granting users access to only the portions of the data they are allowed to see.
Governed Tables and row- and cell-level permissions are now supported through Amazon Athena, Amazon Redshift Spectrum, AWS Glue, and Amazon QuickSight.
My take: This service is intriguing. It could be a revolutionary way to deploy customized 5G solutions for organizations. Time will tell...
Today, we are announcing the preview of AWS Private 5G, a new managed service that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months. With just a few clicks in the AWS console, customers specify where they want to build a mobile network and the network capacity needed for their devices.
AWS then delivers and maintains the small cell radio units, servers, 5G core and radio access network (RAN) software, and subscriber identity modules (SIM cards) required to set up a private 5G network and connect devices.
AWS Private 5G automates the setup and deployment of the network and scales capacity on demand to support additional devices and increased network traffic.
There are no upfront fees or per-device costs with AWS Private 5G, and customers pay only for the network capacity and throughput they request.
Today, we are announcing the preview of AWS Trainium-based Amazon EC2 Trn1 instances. AWS Trainium is the second machine learning chip built by AWS that is optimized for high-performance deep learning training.
Today, we are announcing the next generation storage optimized Amazon EC2 Im4gn and Is4gen instances. These instances are built on the AWS Nitro System and are powered by AWS Graviton2 processors. They feature up to 30TB of storage with the new AWS Nitro SSDs that are custom-designed by AWS to maximize the storage performance of I/O intensive workloads such as SQL/NoSQL databases, search engines, distributed file systems and data analytics which continuously read and write from the SSDs in a sustained manner.
AWS Nitro SSDs enable up to 60% lower latency and up to 75% reduced latency variability in Im4gn and Is4gen instances compared to the third generation of storage optimized instances.
These instances maximize the number of transactions processed per second (TPS) for I/O intensive workloads such as relational databases (e.g. MySQL, MariaDB, PostgreSQL) and NoSQL databases (KeyDB, ScyllaDB, Cassandra), which have medium-to-large data sets and can benefit from high compute performance and high network throughput.
They are also an ideal fit for search engines, and data analytics workloads that require very fast access to data sets on local storage.
My take: Woohoo, Graviton 3!
Starting today, the new Amazon EC2 C7g instances powered by the latest generation custom-designed AWS Graviton3 processors are available in preview. Amazon EC2 C7g instances will provide the best price performance in Amazon EC2 for compute-intensive workloads such as high performance computing (HPC), gaming, video encoding, and CPU-based machine learning inference.
These instances are the first in the cloud to feature the cutting edge DDR5 memory technology, which provides 50% more bandwidth compared to DDR4 memory.
C7g instances provide 20% higher networking bandwidth compared to previous generation C6g instances based on AWS Graviton2 processors.
They also support Elastic Fabric Adapter (EFA) for applications such as high performance computing that require high levels of inter-node communication.
My take: This service has a lot of potential and could help IoT services innovate faster. However, like any new AWS service, it'll take a while to fully hit its stride.
Today, we are announcing AWS IoT TwinMaker, a new service that makes it faster and easier for developers to create and use digital twins of real-world systems to monitor and optimize operations. Digital twins are virtual representations of physical systems such as buildings, factories, production lines, and equipment that are regularly updated with real-world data to mimic the structure, state, and behavior of the systems they represent.
Although digital twin use cases are many and diverse, most customers want to get started by easily using their existing data to get a deeper understanding of their operations.
My take: Kinesis was already great, now having a serverless version make it even more flexible as a tool for builders.
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at any scale. Kinesis Data Streams On-Demand is a new capacity mode for Kinesis Data Streams, capable of serving gigabytes of write and read throughput per minute without capacity planning.
You can create a new on-demand data stream or convert an existing data stream into the on-demand mode with a single click and never have to provision and manage servers, storage, or throughput.
In the on-demand mode you pay for throughput consumed rather than for provisioned resources, making it easy to balance costs and performance.
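As a small sketch of the conversion, here's the request shape for switching an existing stream to on-demand mode. The stream ARN is a placeholder; the StreamModeDetails structure follows the Kinesis UpdateStreamMode API, and the boto3 call is shown as a comment since it needs AWS credentials:

```python
# Parameters for switching an existing Kinesis stream to on-demand capacity.
def on_demand_request(stream_arn: str) -> dict:
    return {
        "StreamARN": stream_arn,  # placeholder ARN supplied by the caller
        "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
    }

# With boto3 (not run here):
#   kinesis = boto3.client("kinesis")
#   kinesis.update_stream_mode(**on_demand_request(
#       "arn:aws:kinesis:us-east-1:123456789012:stream/example"))
```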
Amazon Athena users can now use AWS Lake Formation to configure fine-grained access permissions and read from ACID-compliant tables. Amazon Athena makes it simple for users to analyze data in Amazon S3-based data lakes, but ensuring that users only have access to data to which they're authorized, and that their queries are reliable in the face of changes to the underlying data, can be a complex task.
Starting today, you can use Recycle Bin for EBS Snapshots to recover from accidental snapshot deletions to meet your business continuity needs. Previously, if you accidentally deleted a snapshot, you would have to roll back to a snapshot from an earlier point in time, increasing your recovery point objective.
With Recycle Bin, you can specify a retention time period and recover a deleted snapshot before the expiration of the retention period.
A recovered snapshot retains its attributes such as tags, permissions, and encryption status, which it had prior to deletion, and can be used immediately for creating volumes.
Snapshots that are not recovered from the Recycle Bin are permanently deleted upon expiration of the retention time.
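As a sketch of how you might set that up, here's a retention rule for the Recycle Bin (rbin) API. The 7-day window and description are example values I've chosen; the request shape follows the service's CreateRule call, shown as a comment since it needs AWS credentials:

```python
# Retention rule parameters for Recycle Bin: deleted EBS snapshots stay
# recoverable for 7 days (an example value) before permanent deletion.
retention_rule = {
    "ResourceType": "EBS_SNAPSHOT",
    "RetentionPeriod": {
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    "Description": "Recover accidentally deleted snapshots for 7 days",
}

# With boto3 (not run here):
#   rbin = boto3.client("rbin")
#   rbin.create_rule(**retention_rule)
```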
Metrics Insights is a new feature from Amazon CloudWatch that is in preview. As a fast, flexible, SQL based query engine, Metrics Insights enables developers, operators, systems engineers, and cloud solutions architects to identify trends and patterns across millions of operational metrics in real time and helps you use these insights to reduce time to resolution.
With Metrics Insights, you can gain better visibility on your infrastructure and large scale application performance with flexible querying and on-the-fly metric aggregations.
Use Metrics Insights and other CloudWatch features to monitor your AWS and hybrid environments, and to respond to operational problems promptly.
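To give a feel for the SQL flavor, here's a typical Metrics Insights query wrapped in the GetMetricData request shape. The query text follows the feature's documented SELECT syntax; the query id "q1" and the 60-second period are arbitrary example values, and the boto3 call is commented out since it needs AWS credentials:

```python
# A Metrics Insights SQL query: average CPU across EC2 instances, one series
# per instance. It is passed to CloudWatch GetMetricData as an Expression.
query = (
    'SELECT AVG(CPUUtilization) '
    'FROM SCHEMA("AWS/EC2", InstanceId) '
    'GROUP BY InstanceId'
)

metric_data_query = {
    "Id": "q1",          # arbitrary label for this query
    "Expression": query,
    "Period": 60,        # seconds per returned datapoint
}

# With boto3 (not run here):
#   cw = boto3.client("cloudwatch")
#   cw.get_metric_data(MetricDataQueries=[metric_data_query],
#                      StartTime=start, EndTime=end)
```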
Amazon CloudWatch Evidently is a new capability which helps application developers safely validate new features across the full application stack. Developers can use Evidently to conduct experiments on new application features and identify unintended consequences, thereby reducing risk. When launching new features, developers can expose the features to a subset of users, monitor key metrics such as page load times and conversions, then safely dial up traffic for general use.
Amazon CloudWatch Evidently is part of CloudWatch’s Digital Experience Monitoring capabilities along with Amazon CloudWatch Synthetics and Amazon CloudWatch RUM.
My take: The ability to add your own lens to this tool will really open up its use. This feature allows you to define what you want your teams to be asking and investigating about their builds. A lot of power here...
The AWS Well-Architected Tool now offers the ability for customers to create their own custom lenses.
My take: This functionality is enabled by a complete re-architecting of the Amazon Inspector service. The near real-time results will be a game changer for most builders.
The new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, is now generally available globally. Amazon Inspector has been completely rearchitected to automate vulnerability management and deliver near real-time findings to minimize the time to discover new vulnerabilities.
My take: Is AWS trying to bring about the robot apocalypse?
AWS IoT RoboRunner is a new robotics service that makes it easier for enterprises to build and deploy applications that help fleets of robots work together seamlessly. With AWS IoT RoboRunner, it is easier to build applications that make it possible to interoperate and orchestrate robots from a single view by reducing the complex development work required to connect robots to each other and the rest of your industrial software systems.
Today, we are excited to announce the First Annual AWS BugBust re:Invent challenge. Java and Python developers of all skill levels can compete to fix as many software bugs as possible to earn points and climb the global leaderboard. There will be an array of prizes, from hoodies and fly swatters to Amazon Echo Dots, available to participants who meet certain milestones in the challenge.
There’s also the coveted title of “Ultimate AWS BugBuster” accompanied by a cash prize of $1500 for whoever earns the most points by squashing bugs during the event.
My take: ABOUT TIME...and sadly, still not enough of a change.
Effective December 1, 2021, AWS is making two pricing changes for data transfer out to the internet. Each month, the first terabyte of data transfer out of Amazon CloudFront, the first 10 million HTTP/S requests, and the first 2 million CloudFront Functions invocations will be free.
Free data transfer out of CloudFront is no longer limited to the first 12 months.
In addition, the first 100 gigabytes per month of data transfer out from all AWS Regions (except China and GovCloud) will be free.
Free data transfer out from AWS Regions is also no longer limited to the first 12 months.
These changes will replace the existing data transfer and CloudFront AWS Free Tier offerings, and AWS customers will see these changes automatically reflected in their AWS bills going forward.
All AWS customers will benefit from these pricing changes, and millions of customers will see no data transfer charges as a result.
AWS Lambda now provides content filtering options for SQS, DynamoDB, and Kinesis as event sources. With event pattern content filtering, customers can write complex rules so that their Lambda function is only triggered by SQS, DynamoDB, or Kinesis under the filtering criteria they specify.
This helps reduce traffic to customers’ Lambda functions, simplifies code, and reduces overall cost.
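If you want to try the filtering described above, here's a minimal sketch of the filter payload as boto3 expects it. The function name, queue ARN, and message shape are hypothetical; the pattern syntax follows the EventBridge-style filtering rules.

```python
import json

# Only invoke the function for SQS messages whose JSON body contains
# {"status": "ERROR"}. Each filter pattern is a JSON string.
filter_criteria = {
    "Filters": [
        {"Pattern": json.dumps({"body": {"status": ["ERROR"]}})}
    ]
}

# With boto3 you would pass this when creating the event source mapping:
# lambda_client.create_event_source_mapping(
#     FunctionName="my-function",          # hypothetical function name
#     EventSourceArn="arn:aws:sqs:us-east-1:111122223333:my-queue",  # hypothetical
#     FilterCriteria=filter_criteria,
# )
```

Messages that don't match the pattern are simply dropped from the batch before your function runs, which is where the traffic and cost reduction comes from.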
Now on EC2 Image Builder, customers can share their Amazon Machine Images (AMIs) with AWS Organizations and Organizational Units (OUs) in the image distribution phase of their build process. As their organization structure changes, customers no longer have to manually update AMI permissions for individual AWS accounts in their organization.
Customers can create OUs within AWS Organizations and manage AMI permissions for AWS accounts within those OUs.
AWS today announced AWS WAF Captcha to help block unwanted bot traffic by requiring users to successfully complete challenges before their web requests are allowed to reach AWS WAF protected resources. Captcha is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart and is commonly used to distinguish between robotic and human visitors to prevent activity like web scraping, credential stuffing, and spam.
You can configure AWS WAF rules to require WAF Captcha challenges to be solved for specific resources that are frequently targeted by bots such as login, search, and form submissions.
You can also require WAF Captcha challenges for suspicious requests based on the rate, attributes, or labels generated from AWS Managed Rules, such as AWS WAF Bot Control or the Amazon IP Reputation list.
WAF Captcha challenges are simple for humans while remaining effective against bots.
WAF Captcha includes an audio version and is designed to meet WCAG accessibility requirements.
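As a concrete (and hypothetical) example of the login-page scenario mentioned above, here's a sketch of a WAFv2 rule that serves a CAPTCHA challenge for requests to a `/login` path. The rule name, path, and metric name are illustrative; the structure matches the shape the WAFv2 API accepts in a web ACL's rule list.

```python
# A rule requiring the CAPTCHA challenge on requests to /login.
captcha_rule = {
    "Name": "captcha-on-login",  # hypothetical rule name
    "Priority": 1,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": "/login",
            "FieldToMatch": {"UriPath": {}},
            "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
            "PositionalConstraint": "STARTS_WITH",
        }
    },
    "Action": {"Captcha": {}},  # serve the CAPTCHA instead of allow/block
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CaptchaOnLogin",
    },
}
# You'd include this rule in the Rules list passed to
# wafv2.create_web_acl(...) or wafv2.update_web_acl(...).
```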
My take: A smart expansion of SOC coverage.
AWS Single Sign-On (AWS SSO) is now in scope for AWS SOC 1, SOC 2, and SOC 3 reports. You can now use AWS SSO in applications requiring audited evidence of the controls in our System and Organization Controls (SOC) reporting. For example, if you use AWS to manage access to accounts and applications, you can use the SOC reports to help meet your compliance requirements for those use cases. AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives.
AWS Proton now supports the definition of infrastructure in HashiCorp Configuration Language (HCL) and the provisioning of infrastructure using Terraform Open Source through a git-based workflow. Platform teams define AWS Proton templates using Terraform modules, and AWS Proton leverages the customer-managed Terraform automation to provision or update the infrastructure.
Customers can use Terraform as their infrastructure definition and provisioning tool, and AWS Proton will ensure that modules are used consistently and kept up to date.
AWS Proton now allows customers to sync their Proton templates from a git repository. Platform teams can create AWS Proton templates based on AWS CloudFormation and Terraform templates uploaded to a git repository. AWS Proton is designed to automatically sync and create a new version when changes are made and committed to the git repository.
With this new feature, platform and development teams can eliminate manual steps and reduce the chance for human error.
Amazon Web Services (AWS) has announced the general availability of Enterprise On-Ramp, a new Support tier designed for production and business-critical needs to help customers that are starting their cloud journey and need expert guidance to grow and optimize in the cloud. With Enterprise On-Ramp, customers can solve cloud-related challenges with 24/7 access to AWS experts by phone or live chat, share their screen, and get support to improve issue resolution and eliminate the frustration of back-and-forth emails.
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink so you can securely initiate API calls to Amazon Translate from within your VPC and without using public IPs.
AWS PrivateLink provides private connectivity between VPCs and AWS services, without ever leaving the Amazon network, significantly simplifying your internal network architecture.
You no longer need to use an Internet Gateway, Network Address Translation (NAT) devices or firewall proxies to connect to Amazon Translate.
Amazon QuickSight now supports dataset versioning, which allows dataset owners to understand historical changes within a dataset, preview a specific version, or revert to a previous version if needed. Dataset versions can be viewed, tracked, and switched via the UI. Dataset versioning gives dataset authors the confidence to experiment with their content, knowing that older versions are available and that they can easily revert to them when required.
Amazon Managed Grafana announces new data source plugins for Amazon Athena and Amazon Redshift, enabling customers to query, visualize, and alert on their Athena and Redshift data from Amazon Managed Grafana workspaces. Amazon Managed Grafana now also supports CloudFlare, Zabbix, and Splunk Infrastructure Monitoring data sources as well as the Geomap panel visualization and open source Grafana version 8.2.
Fleet Manager, a feature in AWS Systems Manager (SSM) that helps IT Admins streamline and scale their remote server management processes, now enables a console-based management experience for Windows instances. This new feature provides customers with a full graphical interface to set up secure connections to and manage Windows instances. You no longer need to install additional software, set up additional servers, or open direct inbound access to ports on the instance.
My take: This should be rated lower but it also should've shipped a long time ago. Good upgrade, makes things easier for sure.
Today, we launched usability improvements for the navigation bar in the AWS Management Console. The improvements include a customizable favorites bar, updates to the services menu, and visual updates for consistency and accessibility. The new favorites bar appears when you have selected at least one service as a favorite in the services menu. It also supports an unlimited number of favorites that can be organized with drag and drop.
The updated services menu groups services by category and provides an A to Z listing of all services.
My take: I like to see the expansion of OpenTelemetry. It's a great project and much easier to ingest data when it's all in the same format.
Amazon Elastic Container Service (Amazon ECS) now enables customers to quickly get started monitoring and debugging their applications with traces and custom metrics using AWS Distro for OpenTelemetry (ADOT). This feature allows Amazon ECS customers to use the console to enable metrics and trace collection, and then export to Amazon CloudWatch, Amazon Managed Service for Prometheus, and AWS X-Ray with just a few clicks. This experience simplifies a multi-step manual process of configuring ADOT in task definitions, and enables customers to solve application availability and performance issues.
Today, AWS CloudFormation StackSets announces the capability to import existing CloudFormation stacks into a stack set. StackSets extend the functionality of stacks letting you create, update, or delete stacks across multiple AWS accounts and regions with a single operation.
You can now bring your existing CloudFormation stacks into the management purview of a new or an existing stack set.
This will let you create resources, applications or environments across your AWS Organization and AWS Regions efficiently.
You can subsequently avoid the process of manually replicating and managing the infrastructure in each account and region individually.
Today, we are announcing the public preview of Amazon Linux 2022 (AL2022), Amazon's new general purpose Linux for AWS that is designed to provide a secure, stable, and high-performance execution environment to develop and run your cloud applications. Starting with AL2022, a new Amazon Linux major version will be available every two years and each version will be supported for five years. Customers will also be able to take advantage of quarterly updates via minor releases and use the latest software for their applications.
Finally, AL2022 provides the ability to lock to a specific version of the Amazon Linux package repository giving customers control over how and when they absorb updates.
Starting today, Amazon Neptune announced the general availability of general-purpose T4g and memory-optimized R6g database instances powered by the AWS Graviton2 processor. AWS Graviton2-based instances deliver up to 40% better price performance over comparable current generation x86-based instances for a variety of workloads. Customers running graph workloads using Apache TinkerPop Gremlin, openCypher, or W3C SPARQL 1.1 query languages can expect to see significant improvements in query latency at a lower cost in comparison to x86-based instances of equivalent instance size.
Amazon Relational Database Service (Amazon RDS) on AWS Outposts now supports creating backups locally on AWS Outposts with Amazon S3 support. You can create backups of your Amazon RDS databases running on AWS Outposts to the same Outpost or to the AWS Region of your Outpost, allowing you to maintain your data residency requirements while giving you flexibility for maintaining your data recovery solutions.
CloudFormation support will be coming soon.
Amazon MemoryDB for Redis now supports AWS Graviton2-based T4g instances. T4g is the next generation burstable general-purpose DB instance type that provides a baseline level of CPU performance, with the ability to burst CPU usage at any time for as long as required.
This instance type offers a balance of compute, memory, and network resources for a broad spectrum of general purpose workloads.
Amazon EventBridge expands support to all Regions, except for AWS GovCloud (US) and China, as destinations for its cross-Region event bus target functionality launched in April 2021 (initially launched with three destination Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland)). This will allow customers to consolidate events in one central Region from any Region. This makes it easier for customers to centralize their events for auditing and monitoring purposes or replicate events from source to destination Regions to help synchronize data across Regions.
Amazon Elastic Container Service (Amazon ECS) today open-sourced the build scripts that Amazon ECS uses to build the Amazon ECS-optimized Amazon Machine Image (AMI). These build scripts are now available on GitHub as an open-source project under the Apache license 2.0.
Customers can use these build scripts to build custom AMIs with security, monitoring, and compliance controls based on their organization’s requirements while using the same components as the Amazon ECS-optimized AMI.
Starting today, customers can run macOS Monterey (12.0.1) as Amazon Machine Images (AMIs) on Amazon EC2 Mac instances. Apple macOS Monterey is the current major macOS release from Apple, and introduces multiple new capabilities and performance improvements over prior macOS versions.
macOS Monterey supports running Xcode versions 13.0 and later, which include the latest SDKs for iOS, iPadOS, macOS, tvOS, and watchOS.
Starting today, customers can dynamically attach and detach Amazon Elastic Block Storage (EBS) volumes on their running Amazon EC2 Mac instances. Prior to today, customers attaching or detaching EBS volumes on EC2 Mac instances needed to reboot their instances for revised EBS configuration to be reflected within their macOS guest environments.
Now with this capability, customers do not need to trigger an instance reboot and wait for it to complete when attaching or detaching EBS volumes on EC2 Mac instances.
You can now use Amazon CloudWatch Lambda Insights to monitor, troubleshoot, and optimize the performance of AWS Lambda functions powered by AWS Graviton2 processor. With CloudWatch Lambda Insights you have access to automated dashboards summarizing the performance and health of your Lambda functions.
My take: Super handy if you're using Athena...and you are, right?
You can now manage AWS Step Functions workflows from the Amazon Athena console, making it easier to build scalable data processing pipelines, execute queries based on custom business logic, automate administrative and alerting tasks, and more.
My take: Finally, a speed increase for CloudFormation!
Today, AWS CloudFormation StackSets announces the capability to execute multiple operations simultaneously. StackSets extends the functionality of CloudFormation stacks by letting you create, update, or delete stacks across multiple AWS accounts and Regions with a single operation.
You can now submit more than one operation per stack set to be executed concurrently.
This capability will enable you to reduce overall processing times with StackSets.
Additionally, you can avoid the overhead of building logic to batch and queue operations submitted to StackSets.
With today’s release, AWS Amplify offers a new Authenticator UI component for web apps built with React, Angular, and Vue, giving developers the easiest way to add login experiences to their app with a single line of code. The new Authenticator UI component not only gives developers the quickest way to add user login and registration workflows to their apps, but also gives developers complete control over modifying the layout and behavior to match any designs.
Amazon CloudWatch now supports anomaly detection based on metric math expressions. Amazon CloudWatch anomaly detection allows you to apply machine-learning algorithms to continuously analyze system and application metrics, determine a normal baseline, and surface anomalies with minimal user intervention.
CloudWatch metric math allows you to aggregate and transform metrics to create custom visualizations of your health and performance metrics.
Metric math supports basic arithmetic functions such as +,-,/,*, comparison and logical operators such as AND & OR, and a number of additional functions such as RATE and INSIGHT_RULE_METRIC.
For example, with AWS Lambda metrics you can divide the Errors metric by the Invocations metric to get an error rate, use anomaly detection to visualize expected values on a metric graph, and create an anomaly detection alarm to dynamically alert you when the value falls outside of the expected range.
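That Lambda error-rate example can be sketched as the `Metrics` payload you'd hand to `put_metric_alarm` in boto3. The function name is hypothetical; the math expression divides Errors by Invocations, and `ANOMALY_DETECTION_BAND` wraps the result so the alarm fires when the rate leaves the expected range.

```python
FN = "my-fn"  # hypothetical Lambda function name

def lambda_stat(metric_id, metric_name):
    """Build a MetricStat query for a Lambda metric, summed over 5 minutes."""
    return {
        "Id": metric_id,
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/Lambda",
                "MetricName": metric_name,
                "Dimensions": [{"Name": "FunctionName", "Value": FN}],
            },
            "Period": 300,
            "Stat": "Sum",
        },
        "ReturnData": False,  # inputs to the math, not plotted directly
    }

metrics = [
    lambda_stat("errors", "Errors"),
    lambda_stat("invocations", "Invocations"),
    # Metric math: error rate as a percentage
    {"Id": "error_rate", "Expression": "(errors / invocations) * 100",
     "ReturnData": True},
    # Anomaly detection band around the error rate (2 standard deviations)
    {"Id": "band", "Expression": "ANOMALY_DETECTION_BAND(error_rate, 2)",
     "ReturnData": True},
]
# cloudwatch.put_metric_alarm(
#     AlarmName="lambda-error-rate-anomaly",   # hypothetical name
#     Metrics=metrics, ThresholdMetricId="band",
#     ComparisonOperator="GreaterThanUpperThreshold", EvaluationPeriods=1)
```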
Today, we're excited to announce that Amazon Athena supports AWS Glue Data Catalog partition indexes to optimize query planning and reduce query runtime. When you query a table containing a large number of partitions, Athena retrieves the available partitions from the AWS Glue Data Catalog and determines which are required by your query.
As new partitions are added, the time needed to retrieve the partitions increases and can cause query runtime to increase.
AWS Glue Data Catalog allows customers to create partition indexes which reduce the time required to retrieve and filter partition metadata on tables with tens and hundreds of thousands of partitions.
AWS Audit Manager now offers a dashboard to simplify your audit preparations with at-a-glance views of your evidence collection status per control. You can instantly track the progress of your audit assessments relative to common control domains. These control domains are general categories of controls, not specific to any one framework, that allow customers to quickly assess status on common themes (e.g., track overall issues in the Identity and Compliance control domain).
My take: Yes please!
To help you quickly troubleshoot your permissions in Amazon Web Services (AWS), AWS Identity and Access Management (IAM) now includes the policy type that’s responsible for the denied permissions in access denied error messages. Amazon SageMaker, AWS CodeCommit, and AWS Secrets Manager are among the first AWS services that now offer this additional context, with other services following in the next few months. When you troubleshoot access-related challenges, the identified policy type in the access denied error message helps you to quickly identify the root cause and unblock your developers by updating the relevant policies.
My take: I'm always a fan of price drops!
Starting November 9, 2021, Amazon Rekognition Image APIs pricing has been reduced by up to 38% in all 14 supported regions. This price reduction will automatically reflect in customer bills starting from November 2021.
Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now offers AWS Graviton2 general purpose - M6g instance family. Customers can enjoy up to 38% improvement in indexing throughput, 50% reduction in indexing latency, and 30% improvement in query performance when compared to the corresponding x86-based instances from the current generation M5.
My take: I'm all for anything that helps you get a handle on what's happening in your k8s environments.
Today, we are excited to announce the general availability of Amazon Elastic Kubernetes Service (EKS) Connector. With EKS Connector, you can now extend the EKS console to view your Kubernetes clusters outside of AWS. You can use the EKS console to visualize Kubernetes clusters including your on-premises Kubernetes clusters, self-managed clusters running on Amazon Elastic Compute Cloud (EC2), and clusters from other cloud providers.
Once connected, you can see all of your clusters’ statuses, configurations, and workloads in one place on the EKS console.
Starting today, AWS Network Firewall is compliant with the ISO 9001, ISO 27001, ISO 27017, ISO 27018 and ISO 27701 standards. AWS maintains certifications through extensive audits of its controls to ensure that information security risks that affect the confidentiality, integrity, and availability of company and customer information are appropriately managed.
The FindMatches ML transform in AWS Glue now includes an option to output match scores, which indicate how closely each grouping of records match each other. The FindMatches transform allows you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly.
FindMatches helps automate complex data cleaning and deduplication tasks.
You can now easily set up workload-specific monitoring and view the health of these workloads via Amazon CloudWatch Application Insights problems directly from the Amazon CloudWatch Container Insights console, making it easier to dive deep into issues, troubleshoot problems, and reduce mean time to resolution.
My take: Woohoo!
Amazon Athena’s redesigned console is now generally available in all AWS commercial and GovCloud regions where Athena is available. The new and improved console brings a modern, more personalized experience to all of the features you enjoy in the current console and includes several new features which make analyzing data with Athena more powerful and productive.
Today, AWS Amplify announces a new “amplify add custom” command to add any of the 175+ AWS services to an Amplify-created backend using the AWS Cloud Development Kit (CDK) or AWS CloudFormation. The AWS Amplify CLI is a command line toolchain that helps frontend developers create app backends in the cloud. The new ability to add custom resources enables developers to add additional resources beyond Amplify’s built-in use cases with a single command.
My take: This will help with graph database adoption...not as much as a true serverless version would though.
AWS released an open source Java (JDBC) driver to connect to Amazon Neptune. This makes it easy for customers to connect to Neptune with tools and libraries that support JDBC, such as popular Business Intelligence (BI) tools.
My take: I gave this a three because it's solid functionality. It should get a zero because it should have been there at launch.
We are excited to announce that AWS IoT Device Management resources are now supported on AWS CloudFormation. With a few clicks, you can now use a CloudFormation template to pre-configure and deploy IoT fleet management infrastructure like Job Templates, Fleet Metrics, and IoT Logging settings in a standardized and repeatable way across multiple regions and accounts.
My take: This makes the AWS Management Console search experience a lot more useful...still a ways to go though
We are excited to announce that blogs, knowledge articles, events, and tutorials are available in Unified Search to enable users to easily search and discover information in the AWS Management Console. AWS users can now search for blogs (e.g., Implementing Auto Scaling for EC2 Mac Instances), knowledge articles (e.g., Set Your Preferences for AWS Emails), tutorials (e.g., Remotely Run Commands on an EC2 Instance), and events (e.g., AWS Container Day) without leaving the AWS Management Console.
My take: This will make it easier to leverage Amazon Athena at scale.
If you have data in sources other than Amazon S3, you can use Amazon Athena federated query to analyze the data in-place or build pipelines that extract and store data in Amazon S3. Until today, querying this data required the data source and its connector to use the same AWS account as the user querying the data. Athena now supports cross-account federated query to enable teams of analysts, data scientists, and data engineers to query data stored in other AWS accounts.
My take: We all win when more things get added to AWS Security Hub.
AWS Security Hub has released three new controls for its Foundational Security Best Practice standard (FSBP) to enhance customers’ Cloud Security Posture Management (CSPM). These controls conduct fully-automatic checks against security best practices for Elastic Load Balancing and AWS Systems Manager. If you have Security Hub set to automatically enable new controls and are already using AWS Foundational Security Best Practices, these controls are enabled by default.
Security Hub now supports 162 security controls to automatically check your security posture in AWS.
My take: Very handy to spot operational and security issues.
AWS CloudTrail announces CloudTrail error rate Insights, a new feature of CloudTrail Insights that enables customers to identify unusual activity in their AWS account based on API error codes and their rate.
My take: Nice to see more and more instance types powered by AWS Graviton2. These CPUs are fantastic.
Starting today, general-purpose Amazon EC2 M6gd instances are now available in Asia Pacific (Mumbai), and Europe (London). The compute-optimized Amazon EC2 C6gd instances are now available in Asia Pacific (Mumbai), Canada (Central), and Europe (London).
AWS Control Tower now supports concurrent operations for detective guardrails to help expedite guardrail management. You can now enable multiple detective guardrails without needing to wait for individual guardrail operations to complete. AWS Control Tower provides customers with out-of-the-box preventive and detective guardrails that you can deploy to increase your security, operational, and compliance posture.
My take: A new service with a lot of promise. This one aims to gather all of the health metrics in one place and trigger actions based on the overview of your systems.
Amazon Web Services (AWS) has announced the general availability of AWS Resilience Hub, a new service that provides you with a single place to define, validate, and track the resilience of your applications so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions.
Amazon Elastic Container Service (Amazon ECS) now provides customers enhanced visibility into the health of their compute infrastructure. Customers running containerized workloads using Amazon ECS on Amazon Elastic Compute Cloud (Amazon EC2) or on-premises with Amazon ECS Anywhere can now query the health status of the container runtime (i.e., Docker) for their container instances directly from the Amazon ECS API.
This helps customers improve application resiliency.
My take: A very cool service gets a little bit better with this update.
You can now create and run AWS Fault Injection Simulator (FIS) experiments that check the state of Amazon CloudWatch alarms and run AWS Systems Manager (SSM) Automations. You can also now run new FIS experiment actions that inject I/O, network black hole, and packet loss faults into your Amazon EC2 instances using pre-configured SSM Agent documents.
Because it can be difficult to predict how applications will respond to stress under real world conditions whether in testing or production environments, integrating alarm checks and automated runbooks into your FIS experiments can help you gain more confidence when injecting disruptive events such as network problems, instance termination, API throttling, or other failure conditions.
AWS Toolkits for JetBrains, VS Code and Cloud9 now provide customers with the ability to select and view from a list of 245 resource types across 94 services without leaving their IDEs. With this release, in addition to accessing AWS services that are listed by default in the AWS Explorer pane, customers can choose from hundreds of resources to interact with.
This feature uses the AWS Cloud Control API, enabling the Toolkit to continually and rapidly add new resource types in the future.
My take: Fantastic update. Athena query pricing can be tricky. This'll help shine a light on it when you're doing it...that's key.
Amazon Athena now displays the computational cost of your queries alongside their execution plans. With the release of the EXPLAIN ANALYZE statement, Athena can now execute your specified query and return a detailed breakdown of its execution plan along with the CPU usage of each stage and the number of rows processed.
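If you want to see this breakdown yourself, the sketch below shows the new statement wrapped around an ordinary query and submitted with boto3. The table name and output bucket are hypothetical; the `EXPLAIN ANALYZE` prefix is what triggers the per-stage CPU and row-count reporting.

```python
# EXPLAIN ANALYZE runs the query and returns the execution plan plus
# per-stage CPU usage and rows processed.
query = """
EXPLAIN ANALYZE
SELECT region, COUNT(*) AS events
FROM my_logs            -- hypothetical table
GROUP BY region
"""
# athena.start_query_execution(
#     QueryString=query,
#     ResultConfiguration={"OutputLocation": "s3://my-bucket/results/"},  # hypothetical bucket
# )
```

Note that, unlike plain `EXPLAIN`, `EXPLAIN ANALYZE` actually executes the query, so you still pay for the data it scans.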
My take: The more data points we can get into AWS Security Hub, the better.
AWS Security Hub now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink so that you can securely initiate API calls to Security Hub from within your VPC without requiring those calls to traverse across the Internet. AWS PrivateLink support for Security Hub is now available in all AWS Regions where Security Hub is available. To try the new feature, you can go to the VPC console, API, or SDK to create a VPC endpoint for Security Hub in your VPC.
This creates an elastic network interface in your specified subnets.
The interface has a private IP address that serves as an entry point for traffic that is destined for Security Hub.
You can read more about Security Hub’s integration with PrivateLink here.
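Creating that endpoint from code is a one-liner once the parameters are assembled. Here's a sketch with boto3; the VPC and subnet IDs are hypothetical placeholders, and the service name follows the `com.amazonaws.<region>.securityhub` convention.

```python
region = "us-east-1"  # assumption: a Region where Security Hub is available

endpoint_params = {
    "VpcEndpointType": "Interface",  # creates the elastic network interface
    "VpcId": "vpc-0123456789abcdef0",            # hypothetical VPC ID
    "ServiceName": f"com.amazonaws.{region}.securityhub",
    "SubnetIds": ["subnet-0123456789abcdef0"],   # hypothetical subnet ID
    "PrivateDnsEnabled": True,  # resolve the Security Hub DNS name privately
}
# ec2.create_vpc_endpoint(**endpoint_params)
```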
AWS Lake Formation now support managed VPC endpoints (powered by AWS PrivateLink) to access a data lake in a Virtual Private Cloud (VPC). With AWS Lake Formation-managed endpoints, you can now authorize access to the data lake for client applications and services inside of your VPC and on-premises using private IP connectivity. You can also configure VPC endpoint policies to have finer grained control over how services access AWS Lake Formation.
My take: Awesome! This is a critical security feature for CloudFront distributions.
Today, Amazon CloudFront is launching support for response headers policies. You can now add cross-origin resource sharing (CORS), security, and custom headers to HTTP responses returned by your CloudFront distributions. You no longer need to configure your origins or use custom Lambda@Edge or CloudFront functions to insert these headers.
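To make the "no more Lambda@Edge for headers" point concrete, here's a sketch of a response headers policy config with a few common security headers, in the shape the CloudFront API accepts. The policy name and values are illustrative choices, not defaults.

```python
policy_config = {
    "Name": "security-headers",  # hypothetical policy name
    "SecurityHeadersConfig": {
        # Strict-Transport-Security: one year, including subdomains
        "StrictTransportSecurity": {
            "Override": True,
            "AccessControlMaxAgeSec": 31536000,
            "IncludeSubdomains": True,
        },
        # X-Content-Type-Options: nosniff
        "ContentTypeOptions": {"Override": True},
        # X-Frame-Options: DENY
        "FrameOptions": {"Override": True, "FrameOption": "DENY"},
    },
}
# cloudfront.create_response_headers_policy(
#     ResponseHeadersPolicyConfig=policy_config)
# Then attach the returned policy ID to a cache behavior on your distribution.
```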
AWS re-launches a dramatically improved Amazon Inspector, a software vulnerability discovery/management service.
The idea behind these types of services is to scan your servers and containers before they reach production to identify known vulnerabilities so that you are aware of them and can mitigate them appropriately.
Notice I said “mitigate” not “patch”. Patching is just one of many possible mitigations.
Amazon Inspector launched in 2015 and a lot has changed since then. This is a much needed upgrade to the service that should help builders identify these issues with the minimal amount of effort possible.
This 👇 Twitter thread highlights some of the details of the launch…

@marknca tweeted at 29-Nov-2021, 20:52
This guide will grow as we get closer to the show. Please check back regularly for updates! Ping me (@marknca) if you spot a problem or if something is missing.
As always, the event site is the ultimate source of truth (though I try to provide more context here). The site is available at https://reinvent.awsevents.com
Heads up, AWS has made a change to the campus. Please read the updated “Hotels” section so you’re aware.
I’ve written this guide year after year and it’s an interesting way to track how AWS re:Invent has evolved over the years.
Here are the guides of AWS re:Invent past:
As you would expect, the show has grown in scale and ambition every year as a reflection of the growth of the AWS Cloud and the larger community.
Last year’s event was the outlier. With the global pandemic and widespread shutdowns, the event moved to a completely virtual format.
It was…different.
The session content was amazing but a lot of the things we all love about the event simply weren’t possible in a virtual version. That’s understandable, everyone has struggled with what a virtual event looks like compared to an in-person one.
The conflicted news? AWS re:Invent 2021 will be an in-person event with limited virtual aspects.
That means a very different feel to the show. Not only in size but also the demographics. Will there be a strong international attendance? What safety measures will be in place? What will be the requirements to attend?
The short and simple answer: no one is sure.
The show is months away and if you look back to even a month ago, things were very, very different. There are a lot of questions about this year’s AWS re:Invent.
I have a strong feeling that this guide will change even more than previous years…and that’s saying a lot.
AWS re:Invent is all about learning and having a shared experience. The key to that is making sure that environment is inviting, safe, and supportive.
The AWS Community Code of Conduct applies to both the physical and digital space surrounding AWS re:Invent…and to all other AWS events.
I would say that it’s all common sense and following the golden rule but with tens of thousands of people from around the world sharing the experience for the week, it’s best to call this out explicitly.
READ THE CODE OF CONDUCT AND LIVE BY IT.
If you encounter an issue or problem that runs counter to the code or if you in any way feel unsafe, please reach out for help. Amazon Security can be reached at [email protected] for live events. In addition, any one of the on-site staff or security can help resolve any issues.
No system is perfect but this is a great start at creating a safe, inclusive community at this fantastic event.
I’m an AWS Community Hero, a long time AWS re:Invent attendee (all of them!) and a big fan of the show.
I teach a few courses here on the A Cloud Guru platform (AWS Business Essentials, Introduction to the AWS Marketplace, and Mastering the AWS Well-Architected Framework), tweet regularly, and generally nerd out about all things cloud and security.
I write this guide every year to help the community get the most out of the show. Also, if I’m being 💯 honest, it helps me keep track of everything that’s going on so I can get the most out of the show. 😉
AWS has updated the health measures they are taking to help reduce the overall risk to everyone attending AWS re:Invent 2021.
As with everything, these measures may change as we get closer to the show.
Please read the entire page on the official re:Invent site, but the key aspects are:
AWS re:Invent 2021 is now sold out. The cost was $1,799 USD and virtual access remains free.
If you were curious, registration officially opened on Tuesday, 15-Jun-2021.
In keeping with the process from the past couple of in-person shows, hotel rooms at discounted rates will be available as part of the registration flow and after you’re registered.
These rates are always very good and the blocks of rooms are at properties hosting the show. In past years, these room blocks went very, very quickly.
So quickly in fact, that AWS expanded the “campus” to include a number of hotels that weren’t hosting any of the event, just ones where discounts were made available.
This year’s campus is again spread out across a number of venues;
Some of the originally listed properties are no longer part of the campus. There will be no events hosted at these properties and they are no longer listed as a “sleeping room hotel”:
Venues included in campus but not hosting events;
We’ll get more information closer to the event (as always!) but it looks like most of the content is going to take place at the Venetian/Palazzo, Wynn/Encore, and Caesars Forum.
That’s a bit of a shift from previous years.
The ARIA has dropped its activities in favour of Caesars Forum. That actually makes a lot of sense. Now that AWS has adjusted and dropped the MGM Grand from the campus, most activities will be taking place near each other. The Venetian/Palazzo, Wynn/Encore, and Caesars Forum are the epicentre of AWS re:Invent 2021.
This should make moving between venues a lot easier during the day and give you more flexibility in where you decide to actually stay.
It looks like you’re going to be heading to one of these three hotels (and maybe all three) every day of the show.
As you can probably tell, I’m a big fan of this show. This year will be great but it won’t be the same. The world has changed.
If you aren’t comfortable or cannot attend in person, there is a limited virtual component. I’ll be honest, as an international attendee, I’m disappointed in this aspect of the show.
As it currently stands, the keynotes and leadership sessions will be live streamed. Breakout sessions will be available on-demand after the event.
Given the virtual infrastructure in place for AWS re:Invent 2020, I’m assuming that will all take place on that platform (vs. say, YouTube).
This approach isn’t really any different than previous in-person years. 😔
I’ve created a separate guide to attending AWS re:Invent virtually that might help if you’re going remote.
Season four of the show, “How to re:Invent” has already kicked off!
This year, Annie Hancock returns with a new co-host, Kelley Schultz.
This show brings you tips and tricks about the conference and gives you an inside look as we get closer to the show.
Stay tuned for each episode as they are always chock full of great info.
The episodes published so far;
AWS has again put a reserved seating system in place. That’s probably even more important this year with the health and safety measures put in place.
You can view the session catalog and reserve your seat by logging in.
As always, the earlier you do this the better. Reserved seating is open now, so don’t delay.
The good news is that this process—while sometimes frustrating—generally makes the week go much smoother. Also, keep checking back as AWS often adds repeats for the most popular sessions.
…and keep in mind that all breakout sessions will be available on YouTube shortly after re:Invent. However, chalk talks, workshops, and some other session types won’t be recorded.
Keep that in mind as you build out your week.
The official agenda has been published. This gives us a glimpse at the structure of the show.
Things kick off on Sunday with the Midnight Madness party. Other than that, Sunday is just a day to get oriented, check-in to get your badge, and pick up your swag!
Monday is when things really start to roll. The certification exams start, labs open up, bootcamps kick off, and some of the breakout sessions start.
We also have the first keynote of the week. This one is at 3:00—5:00pm pacific time. After that keynote, the expo hall will host the now traditional Welcome Reception.
Tuesday is the first “standard” day of the show. Most activities are open and we have a keynote from 8:00—10:00am pacific.
In the evening, there are several restaurant receptions hosted by sponsors and a few “quirky activities” which promise to be a lot of fun.
Wednesday is almost a cookie cutter of Tuesday with the one exception being a second keynote that runs from 3:00—5:00pm pacific.
On Thursday, we’re back to the Tuesday schedule during the day. Only the one, morning keynote. We have the AWS Certification Reception in the early evening and the night caps off with the legendary re:Play party.
No idea how that’s going to feel this year given the circumstances but every other year, it’s been an absolute blast.
re:Invent is still going strong on Friday. Maybe “strong” is not the right word; that always depends on how your re:Play party experience was. No keynotes on Friday, but breakout sessions (often with repeats of the best of the week) run until 12:30pm when the conference officially ends.
AWS has also announced an expanded “leadership sessions” series. These were introduced a couple years back to serve as “mini-keynotes”.
I like the idea as the sessions are typically delivered by a senior AWS leader in that area.
Here are the 2021 leadership sessions;
These sessions all follow the same general format. They review the biggest announcements from the past year for that theme, then make a few announcements of new services or features either launching immediately or coming soon.
These announcements are ones that are not quite main keynote level but still deserving more attention than just a blog post or documentation update.
Navigating all of the different sessions can be overwhelming. This year, a few of the AWS Heroes have again come together to present various guides aligned with specific interests.
The guides are downloadable PDFs which is a bit of a change from previous years where they were built into the registration system and available as HTML.
The following guides are available;
In addition to the Hero guides, AWS has also pulled together guides for specific industries and a few tailored to APN Partner interests.
For now, it appears the Quad has been removed from AWS re:Invent 2021. It’s still early (mid-September) but the following events are currently up in the air:
No word yet if they’ll find a new home as the AWS re:Invent 2021 campus has been reduced in size.
I’ll leave the description of these events here in case they do find a new home and come back to the schedule…
The DeepRacer league has grown year after year. It’s always a blast to watch or race in. You can get started by reviewing the documentation online or watching some of the great content here on the A Cloud Guru platform.
The Builder’s Fair is basically a big science fair. Groups from schools, companies, and other organizations show off their projects with hands-on demos and interactive discussions. It’s a very cool way to see some different use cases for AWS technologies and their real world applications.
AWS Connected World is new to AWS re:Invent. This space provides an area to highlight IoT, AI/ML, AR/VR, robotics, and more. AWS experts and partners will be on hand to show how they’ve applied these technologies to solve interesting problems.
The Demo Theatre in The Quad is where AWS Partners and experts can showcase short (usually 15m) demos of various services and solutions. The casual nature of the theatre makes it easy to ask questions and learn more about specific solutions.
Again, 👆 these events and the entire Quad have been removed from the official site alongside the reduction in campus size. Stay tuned for more.
For the past few years, re:Invent has kicked off with this great event. Styled after a US college sports pep rally, the party always hosts the first product launch of the week along with fun activities, food, music…and maybe, just maybe, another world record!
The legendary conference party is back again for its 10th iteration. Every year makes the last seem small. Expect more dodgeball, office chair races, and of course amazing musical performances.
With most of AWS re:Invent 2021 taking place physically, there are going to be a host of other social activities.
AWS has updated the “Quirky” page with some of the available activities.
This year there will be;
AWS does a great job of filling the calendar making sure that you can find something to suit. You definitely won’t be bored during the week!
Please check back regularly for updates! Ping me, if you spot a problem or if something is missing @marknca.
Performance Efficiency is one of the five pillars of the AWS Well-Architected Framework. The framework describes the principles and techniques required to make informed trade-offs when you’re building in the AWS Cloud.
I’ve taught thousands of builders how to build better using the framework on the A Cloud Guru platform. Be sure to check out my course, “Mastering The Well-Architected Framework”
This 👇 Twitter thread dives deeper into the Performance Efficiency pillar of the framework…
(Embedded Twitter thread from @marknca, 26-Nov-2021: 8 tweets.)
AWS re:Invent is always THE event in cloud. This year a lot of people can’t attend in person. This post tells you how to get the most out of the virtual side of the event.
I’ve got a lot of content up around AWS re:Invent 2021. Be sure to browse the full list. If you’re attending in person, you want “[The Ultimate Guide to AWS re:Invent 2021](/2021/the-ultimate-guide-to-aws-reinvent-2021)”
Attending AWS re:Invent 2021 virtually is free. You just need to sign up and that will give you access to the event system.
…and yes, we now know that AWS is using a slightly modified version of their new event platform from AWS re:Invent 2020. That was a mixed bag last year but with the vast majority of the event happening in person, it should be a better experience this year.
If you haven’t already registered, make sure you take care of that soon. Remember that in-person attendance is now sold out.
The main focus of the platform is your agenda. This is essentially your event calendar.
There are two major problems with it;
It’s hard to find sessions and then to remember to attend them. Not a great start but it’s what we have to work with.
No matter, the content is divided into two main areas; live and on-demand.
It’s not mentioned, but I suspect the in-person sessions will be published here. I hope they also end up on the AWS Events YouTube channel, as it’s a lot easier to share content from there.
At the moment, there are 554 unique sessions listed as “on demand.” That sounds close to the total number of breakout sessions. The other session types (chalk talks, workshops, and Builder’s Sessions) aren’t typically recorded.
I’ve dumped the session catalog to make it a bit easier to search. You can use your browser’s find function to locate a session you’re interested in and then click through to that session on the re:Invent platform.
Not ideal, but simple and easy to work with…well easier than the platform itself.
The question that I can’t seem to find an answer to is when will the sessions become available on demand? When I find out, I’ll update this post.
The live schedule is as expected. The five keynotes and the leadership sessions will all be broadcast live based on Pacific Time. They will then be rebroadcast twice, once for Europe and again for Asia.
Additionally, we’ll get regular content on Twitch via the AWSonAir program. That’s going to be running 24/7 and will include these shows (with some repeats for the time zones);
More about these shows in the description on the event platform.
If you like “This is My Architecture”, I’ve been doing reaction videos of old episodes over on my YouTube channel.
I’ve added all of the live events to an accessible calendar to make it easier to overlay the broadcasts with your actual calendar. Not perfect, but better than having that data locked in the platform.
As is reasonable, things may change and the AWS re:Invent platform is the ultimate source of truth here. Make sure to double check the timing & schedule.
This is the first time that I’m not on site for a physical re:Invent. I have very mixed feelings about it. In order to get the most out of the virtual side of the event, I’m planning on the following;
What I’m missing out on is all of the personal connections. It’s been too long and sadly, it’ll be a little bit longer for me. I’m happy to connect virtually throughout the show, I’m sure there’s going to be a lot to talk about.
Reliability is one of the five pillars of the AWS Well-Architected Framework. The framework describes the principles and techniques required to make informed trade-offs when you’re building in the AWS Cloud.
I’ve taught thousands of builders how to build better using the framework on the A Cloud Guru platform. Be sure to check out my course, “Mastering The Well-Architected Framework”
This 👇 Twitter thread dives deeper into the Reliability pillar of the framework…
(Embedded Twitter thread from @marknca, 25-Nov-2021: 8 tweets.)
This talk was presented at SPIE on 25-Nov-2021.
When a business moves to the cloud, there are six major strategies they use for each IT service. Whether they move a system “as-is” or do a complete re-architecture, each approach has specific advantages and disadvantages.
When a security practice moves to the cloud, it’s almost always using the same “as-is” strategy. Why?
In this talk, we’ll examine that predominant strategy and explore its impact. We’ll also take a look at what we could—and should—be doing in cloud environments to move our cloud security practices forward.
Can security get the same benefits from the cloud as the rest of the business? Let’s discuss the challenge together!
Businesses use any combination of these six strategies to migrate to the cloud.
...but security typically only uses one, "Retain / revisit"
Why?
Security really should be focusing on improving their practice by leveraging the three options that lead to a cloud native practice; re-platform, re-host, refactor/re-architect.
Remember, throughout this process the principles of security don't change. But we do need to change the way we—the security community—work.
Let's accept for a minute our current reality in the security community.
We're constantly fighting fires. There's no time to work on anything forward looking.
Even if we did have the time, we're not involved in the rest of the business at the levels we should be.
...and we simply don't have the resources to do the job we're tasked with.
😔
Stop saying no. Stop moving slow. Stop adding weighty processes to everything.
It should be obvious, but don't forklift your current practice into the cloud.
That way madness lies...
Instead, we need to focus on the Shared Responsibility Model, automating everything, and delegating security responsibilities to other teams who are better positioned to meet our goals.
This model dictates how all operational and security activities work. It shows whether you—the builder—or the Cloud Service Provider (CSP) is responsible for a specific area of the system.
We start with on-premises, where you are responsible for everything. This is the traditional working model but it still lines up with this concept. You were sharing responsibilities. Just with different teams, not external partners.
Moving into the cloud, you immediately delegate half of the work to the CSP. That only increases as you move towards SaaS-type services.
No matter what, you are responsible for your data and for configuring the CSP's services. Those are always your responsibilities.
The business advantages are clear. The more you delegate to your CSP, the more you can focus on providing direct business value.
This means that you should bias towards SaaS-type or managed services whenever possible.
The good news? Security responsibilities follow suit.
People often remark that it's hard to figure out where your responsibilities lie. It turns out, it's actually pretty simple.
You need to verify if you're expected to manage the operating system and the application layers. That's it.
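That two-question check can be sketched as a tiny lookup. This is an illustrative simplification for teaching purposes (the layer names and cutoffs below are my own, not an official CSP mapping):

```python
# Illustrative layer stack, from "always yours" down to "always the CSP's".
LAYERS = ["data", "application", "operating system",
          "virtualization", "physical hardware"]

# How far down the stack the customer manages under each service model.
# These cutoffs are a teaching simplification, not an official mapping.
CUSTOMER_CUTOFF = {
    "on-premises": 5,  # you manage everything
    "iaas": 3,         # the CSP takes virtualization and below
    "paas": 2,         # the CSP also takes the operating system
    "saas": 1,         # you manage your data and the service configuration
}

def your_layers(service_model: str) -> list[str]:
    """Layers the customer still secures under a given service model."""
    return LAYERS[:CUSTOMER_CUTOFF[service_model.lower()]]

def manages_os(service_model: str) -> bool:
    """One of the two questions that matter: do you manage the OS?"""
    return "operating system" in your_layers(service_model)

def manages_application(service_model: str) -> bool:
    """The other question: do you manage the application layer?"""
    return "application" in your_layers(service_model)
```

Notice that `your_layers` always includes your data, no matter the model, which lines up with the rule that your data and the service configuration are always yours.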
This means your areas of security focus should be...
We make a lot of assumptions on-premises about our security practice. In fairness, they're usually true.
...these don't hold up anymore.
Permissions change significantly in the cloud. Some key areas;
The lifecycle of systems in the cloud is unrecognizable compared to on-premises. You should be looking in to...
The IP address is the gold standard for system identity on-premises. That's out the window in the cloud.
If you haven't figured it out yet, the overall environment in the cloud is drastically different from on-premises...and that's a good thing.
Again, we deal with a set of assumptions from our on-premises environments.
Gone are the days of manual change process. Automation is key to success here.
Resource lifecycle is very dynamic in the cloud. The effort to get something into production is so low that it happens all the time.
We've touched on this a few times already, the rate of change is exponentially faster than on-premises.
The assumptions built up on-premises have helped security operate. Again, these were made through a series of logical steps but the result doesn't make sense in today's reality.
Gating is a useful tool for security teams. It doesn't go away in the cloud, but it does change significantly. Gates should now be fully automated and transparent to the other teams involved with the systems.
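An automated gate can be as small as a function that runs in the delivery pipeline and blocks the deploy when checks fail, with every finding visible to the team that owns the system. A hypothetical sketch (`run_checks` stands in for whatever scanner you've wired in):

```python
def security_gate(findings: list[str]) -> int:
    """Fully automated, transparent gate: surface every finding to the
    owning team and return a non-zero exit code to stop the pipeline."""
    for finding in findings:
        print(f"BLOCKED: {finding}")
    return 1 if findings else 0

# In CI, you'd wire it up roughly like this (run_checks is hypothetical):
#   import sys
#   sys.exit(security_gate(run_checks()))
```

No tickets, no meetings, and the result is the same for every deploy.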
As with everything else, the infrastructure is very different in the cloud.
No security team wants to work in isolation. However, the lack of time, constant firefighting, and other constraints make it really hard to work together effectively.
👆 all of these things make it clear that if the security community doesn't change, it'll be passed by. No one wants that.
Step 1.
This step is all about coming up to speed on what the cloud is and what it means for the business. Take off your security hat and just learn.
Step 2.
This step gets you "cloud-y". Becoming comfortable with the core drivers of cloud is critical to security success.
Step 3.
With a strong foundation underneath you, it's time to branch out. Working in a modern way with other teams throughout the business.
Remember the key is small steps over and over again. With each one, make sure you are learning and getting a little bit better!
Thank you!
Cost Optimization is one of the five pillars of the AWS Well-Architected Framework. The framework describes the principles and techniques required to make informed trade-offs when you’re building in the AWS Cloud.
I’ve taught thousands of builders how to build better using the framework on the A Cloud Guru platform. Be sure to check out my course, “Mastering The Well-Architected Framework”
This 👇 Twitter thread dives deeper into the Cost Optimization pillar of the framework…
(Embedded Twitter thread from @marknca, 24-Nov-2021: 8 tweets.)
This talk was presented at TASK on 24-Nov-2021.
While the rest of the business has jumped into the push towards cloud, how should your security practice adjust? Architectures, visibility requirements, and data protection needs, among others, are different in the cloud.
It can be hard to know where to focus. How can you identify and manage different risks and exposures? There are so many changes, what steps should you take?
In this session, we’ll look at different areas of your security practice, how they shift, and how to prioritize them as your organization moves to the cloud.
The goal is to provide a map of your next steps and to highlight what resources can help you not just move your practice to the cloud but improve it at the same time.
We're beyond this now, right?
Let's talk about how we should be working in the cloud.
We—the security community—haven't advanced our work methodology as much as we should because it's a pain in the you-know-what on-premises.
On prem, there are just too many different ways of interfacing with systems. This drives up the cost of automation.
In the cloud, this is simplified greatly. The CSP provides a unified interface through which you can interact with almost all of your stack. That opens up a lot of new possibilities.
Let's take a look at the...
...and come up with a plan.
This model dictates how all operational and security activities work. It shows whether you—the builder—or the Cloud Service Provider (CSP) is responsible for a specific area of the system.
We start with on-premises, where you are responsible for everything. This is the traditional working model but it still lines up with this concept. You were sharing responsibilities. Just with different teams, not external partners.
Moving into the cloud, you immediately delegate half of the work to the CSP. That only increases as you move towards SaaS-type services.
No matter what, you are responsible for your data and for configuring the CSP's services. Those are always your responsibilities.
The business advantages are clear. The more you delegate to your CSP, the more you can focus on providing direct business value.
This means that you should bias towards SaaS-type or managed services whenever possible.
The good news? Security responsibilities follow suit.
People often remark that it's hard to figure out where your responsibilities lie. It turns out, it's actually pretty simple.
You need to verify if you're expected to manage the operating system and the application layers. That's it.
This provides clear areas of security focus...
You are already familiar with securing your data, operating systems, and applications. Nothing changes at these layers in the cloud.
So, for this talk, we'll focus on the other areas...
Remember the principles of security don't change...but the way we work needs to.
There are four main tenets...
A feedback loop is simply following these steps;
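In code, a feedback loop boils down to: measure the current state, compare it to the goal, adjust, and repeat. A toy sketch (the numbers are arbitrary; imagine tightening an alert threshold a notch each cycle):

```python
def feedback_loop(measure, target, adjust, state, max_iterations=25):
    """Generic feedback loop: keep measuring and adjusting until the
    measurement matches the target (or we give up)."""
    for _ in range(max_iterations):
        current = measure(state)
        if current == target:
            break
        state = adjust(state, current, target)
    return state

# Toy run: nudge a value toward a target one step at a time.
result = feedback_loop(
    measure=lambda s: s,
    target=10,
    adjust=lambda s, cur, tgt: s + (1 if cur < tgt else -1),
    state=4,
)
```

The mechanics are trivial; the hard part in a security practice is picking a measurement you trust and actually closing the loop on a schedule.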
Being part of a larger team is hard.
Security teams are typically firefighting constantly. That makes it very difficult to coordinate and collaborate with the rest of the business.
Add to that the chronic challenge of finding enough resources and it's completely understandable why security teams operate the way they do.
But change needs to start somewhere. This is a big one and a perfect example of the cliché: short term pain for long term gain.
"as-Code" is a popular buzzword. Thankfully, there's real value behind the hype.
Everything should be code in the cloud. From infrastructure definitions, to security controls, to automations, to...um...code.
Having source or a template or a script that can (re)create whatever is needed is an amazing ability.
It also means we can track changes over time, analyze before running, and programmatically interact with everything in our environment.
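Because the definition is just data, you can analyze it before it ever runs. A minimal sketch against a made-up template format (real tools like cfn-lint or Open Policy Agent do this at scale; the resource fields here are hypothetical):

```python
import json

# Hypothetical, simplified infrastructure template. Real formats
# (CloudFormation, Terraform, etc.) differ; the point is that the
# definition is data you can inspect programmatically before deploying.
template = json.loads("""
{
  "resources": [
    {"name": "web",  "type": "bucket", "public": false},
    {"name": "logs", "type": "bucket", "public": true}
  ]
}
""")

def pre_deploy_findings(template: dict) -> list[str]:
    """Flag risky settings before the template is ever applied."""
    return [
        f"{r['name']}: publicly accessible"
        for r in template["resources"]
        if r.get("public")
    ]
```

The same template sits in version control, so every risky change shows up in a diff before it ships.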
DevOps is successful because of its constant push for automation. Security needs to be the same.
Automation is more reliable, faster, and helps take the pressure off of your team.
Opportunities;
This new feature from AWS was just released and it lets you remove a Windows Server remote access tool from your stack.
Instead of exposing RDP access in your design, this feature automatically manages the access securely for you.
Challenges;
This same AWS remote access feature only received a couple of paragraphs in the "What's New" stream and a documentation update.
...and this is just one of hundreds of features that get released every year.
Leading up to AWS re:Invent 2021, AWS has already released 215 new features.
Misconfiguration of CSP services is the #1 security issue in the cloud right now.
All of the cloud-specific breaches in the past few years have been a result of misconfigurations.
In fact, the few security issues reported by the CSPs themselves have also been misconfigurations!
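These misconfigurations are exactly the kind of thing a recurring automated check can catch. A sketch run against firewall-rule data shaped roughly like what a CSP API returns (the structure here is simplified for illustration, not an exact API response):

```python
def open_to_world(rules: list[dict]) -> list[dict]:
    """Return ingress rules that allow any source ("0.0.0.0/0"),
    the classic misconfiguration behind many cloud breaches."""
    return [
        rule for rule in rules
        if rule["direction"] == "ingress" and "0.0.0.0/0" in rule["sources"]
    ]

# Simplified sample data, not a real API response.
rules = [
    {"direction": "ingress", "port": 443, "sources": ["0.0.0.0/0"]},
    {"direction": "ingress", "port": 22, "sources": ["0.0.0.0/0"]},
    {"direction": "ingress", "port": 5432, "sources": ["10.0.0.0/8"]},
]
flagged = open_to_world(rules)
```

Run on a schedule or on every change event, a check like this turns "hope no one opened SSH to the world" into a continuous control.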
Plan;
Opportunities;
Google Cloud Armor is a good example of a boundary control.
It provides DDoS, WAF, and IP access control capabilities for any workload behind it. Whether that's virtual machines, containers, functions or something else.
Challenges;
...back to Google Cloud Armor. This service can protect Cloud Functions but if a user or actor knows the direct URL for the function, the request won't pass through Google Cloud Armor.
While still incredibly useful, this control doesn't completely cover the service boundary.
Risk;
Plan;
Opportunities;
Challenges;
Risk;
Plan;
Step 1;
Step 2;
Step 3;
Step 4;
Step 5;
Step 6;
Thank you!
Security is one of the five pillars of the AWS Well-Architected Framework. The framework describes the principles and techniques required to make informed trade-offs when you’re building in the AWS Cloud.
I’ve taught thousands of builders how to build better using the framework on the A Cloud Guru platform. Be sure to check out my course, “Mastering The Well-Architected Framework”
This 👇 Twitter thread dives deeper into the Security pillar of the framework…
(Embedded Twitter thread from @marknca, 23-Nov-2021: 12 tweets.)
In late 2017, Airbnb did an AWS “This is My Architecture” video. The video talks about how they built a system to test their Amazon RDS databases.
Now, a few years later, I react to that video and see what’s stood the test of time, what could be done simpler given today’s technology, and generally critique the design against the AWS Well-Architected Framework.
The AWS Well-Architected Framework is designed to help you and your team make informed trade offs while building in the AWS Cloud. It’s built on five pillars;
These pillars cover the primary concerns of building and running any solution. And as much as we’d all love to have everything, that’s just not possible.
…enter the framework.
It’ll help you strike the right balance for your goals to make sure that your build is the best it can be now and moving forward.
I often get asked why I talk about building in the cloud and architectural choices so often…aren’t I a security person?
Yes, I do focus on security and architecture is a critical part of that.
There are really two types of security design work. The first is when you’re handed something and need to make sure the risks of that technology match the risk appetite of the users.
The second type is when you’re building the technology. This is where making choices informed by security early in the process can have profound effects. You’re no longer bolting security on but building it in by design.
That’s why I talk about architecture and building so much. It’s where we all can have the largest possible security impact!
This video—and the ones that will come after—looks at a specific set of design decisions and how they balance the concerns of the AWS Well-Architected Framework…where security is one of the five pillars.
At the time, Airbnb was leveraging Amazon RDS (and still may be). They set out to build a way to test their databases but not just with the queries and load they were currently seeing, but with what could happen as well.
Simply replaying the logs wouldn’t work. They designed a simple system to capture and protect production queries and data. This system allowed them to replay and expand on those queries to simulate what would happen if these new types of loads hit their production systems.
Learn more in the reaction video 👆.
Btw, I’ve updated my course, “Mastering The AWS Well-Architected Framework” on A Cloud Guru. If you want a solid walk through of the ideas behind the framework and how to apply it to your work in the AWS Cloud, check it out!
Operational Excellence is one of the five pillars of the AWS Well-Architected Framework. The framework describes the principles and techniques required to make informed trade-offs when you’re building in the AWS Cloud.
I’ve taught thousands of builders how to build better using the framework on the A Cloud Guru platform. Be sure to check out my course, “Mastering The Well-Architected Framework”
This 👇 Twitter thread dives deeper into the Operational Excellence pillar of the framework…
(Embedded Twitter thread from @marknca, 22-Nov-2021: 13 tweets.)
AWS Labs has a lot of open source code up on GitHub. In this post, we’re taking a look at AWS Data Wrangler. This project provides a smoother interface between pandas DataFrames and various AWS Cloud data services.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 19-Nov-2021, 18:05
AWS Labs has a lot of open source code up on GitHub. In this post, we’re taking a look at AWS Data Wrangler. This project provides a smoother interface between python pandas DataFrames and various AWS Cloud data services.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 19-Nov-2021, 13:02
AWS Labs has a lot of open source code up on GitHub. Amazon S3 Glacier Re:Freezer is one of those projects.
It’s got a simple premise: get the data out of Amazon S3 Glacier and back into Amazon S3.
That sounds simple but the solution is surprisingly nuanced. I call out a few more details in the Twitter thread below…
@marknca tweeted at 18-Nov-2021, 13:07
AWS Labs has a lot of open source code up on GitHub. This project, “Amazon API Gateway Serverless Developer Portal” builds a slick web interface for your Amazon API Gateway APIs.
Think of it like automatic documentation. It’s a great way to help developers get up to speed with your APIs.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 18-Nov-2021, 18:02
AWS Labs has a lot of open source code up on GitHub. This repo helps you visualize your AWS workloads and highlight areas of cost.
AWS Perspective builds out a simple-ish solution that creates architecture diagrams of your current workloads.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 17-Nov-2021, 18:03
AWS Labs has a lot of open source code up on GitHub. The code in this repo lets your embed Amazon QuickSight dashboards in your HTML.
I’m a big fan of Amazon QuickSight. I think it solves a lot of business problems quickly and easily for customers. The service’s biggest weakness is that no one really knows about it or its functionality.
This repo, Amazon QuickSight Embedding SDK, lets you embed QuickSight dashboards in your HTML. I think that goes a long way to help make the insights this service can generate a lot more accessible.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 17-Nov-2021, 13:03
AWS Labs has a lot of open source code up on GitHub. Gluon TS helps you build and train models for forecasting time series data. It’s built on the Apache MXNet project and can help with a lot of the heavy lifting for this use case.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 16-Nov-2021, 13:06
AWS Labs has a lot of open source code up on GitHub. This repo contains a solid set of AWS WAF rules for common web-based attacks.
AWS WAF Security Automations creates some simple rules for common attacks that really should be part of the default offering for AWS WAF. But they aren’t, which makes this repo extremely useful.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 16-Nov-2021, 19:33
In the past few days, we’ve seen two great essays around the topic of serverless. Both written by people that I respect and admire, these essays got me thinking.
The first is “The Unfulfilled Promise of Serverless” by Corey Quinn. The second is a rebuttal from Jeremy Daly, “The Unfulfilled Potential of Serverless”.
You should read both essays in full, but I’ll give you the gist here.
Corey’s argument is that serverless compute was supposed to drive change in how we build solutions. It’s failed to live up to that. He cites the lack of portability and the low cost leading to a perception of low value as reasons. He calls out the lack of widespread adoption. That makes it harder to find talent comfortable with these solutions.
These are all great points and I agree with Corey.
Jeremy’s point is that it is early for serverless as a concept. If we get out of the weeds, there’s a lot of value here. The tooling and ecosystem around the concept aren’t where they need to be. That’s what is driving Corey’s argument. As things progress, we should see these issues recede, realizing the potential of serverless.
These are great points and I also agree with Jeremy.
But I think both essays only touch on a little of what’s going on here.
Fifteen years into this “cloud” thing and AWS itself provides a good example of what’s happening.
When AWS expanded beyond just a couple of services, it reached a point where its potential was revealed. This wasn’t just an easier-to-use data centre. Building in the cloud let businesses shift away from a mountain of undifferentiated IT work.
The hardware and software upgrades. Provisioning new capacity. Operating lower level systems that users never see. And more. IT is very much an iceberg. The cloud teased that you could hand that work off and focus on solving business problems.
Some of that potential has been realized. However, a lot of businesses saw the cloud as better outsourcing. That was their existing mental model. They wanted to put the new technology into the old model. It was comfortable to do so.
The APIs and clear(ish) pricing from AWS were a smoother way of doing business vs. the traditional outsourcing contracts that plagued IT of the 1990s. That’s what was driving a lot of “the cloud is cheaper” myths.
It took years to break from that mental model. Eventually we realized that the cloud was different. It wasn’t about a better outsourcing model.
It was—and is—about a cloud service provider operating at a scale where the economics are different. This economy of scale allows them to create new services and technologies. Most important, 99.9% of businesses will never reach that scale.
That scale provides an opportunity for the builders using the cloud.
Builders have access to technologies that would’ve taken years of steady growth to reach. Who would’ve been able to build out a storage service that had 11 9’s of durability? A queuing service handling millions of messages per minute?
Now, it is a simple API call and only costs pennies per hour.
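That “11 9’s” figure sounds abstract, so here’s a quick back-of-the-envelope calculation. The object count below is made up purely for illustration:

```python
# Back-of-the-envelope: what "11 9's" (99.999999999%) of annual
# durability means in practice. The object count is illustrative.
annual_loss_probability = 1 - 0.99999999999   # one in a hundred billion
objects_stored = 10_000_000_000               # ten billion objects

expected_losses_per_year = objects_stored * annual_loss_probability
# Roughly one object lost every ten years, across ten billion objects.
```

Reaching that kind of durability on your own hardware would mean years of engineering that has nothing to do with your actual business.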
This created new opportunities. The AWS Cloud was an exciting place to build for the first few years of mainstream cloud acceptance.
It struck the right balance between power and simplicity. The tooling was mature enough. New features and services came out at a steady pace. With minimal effort, you could create very innovative solutions.
That’s still possible today. But builders are now dealing with overwhelming complexity. Just look at the list of AWS Cloud services in the AWS Management Console. How does anyone know where to get started?
In this confusion, builders default back to what they know.
This is why containers have skyrocketed in popularity. Especially compared to serverless designs over the past three years.
I see a lot of container-based solutions that would be better as serverless designs. Better in that they would be more efficient, less costly, and scale more easily.
Why do these container-based solutions keep popping up?
Containers hit the sweet spot. They are familiar enough but push the envelope in interesting ways. They allow builders to be more productive using modern development methods. At the same time, they don’t require a new mental model.
What both Jeremy and Corey hint at is the people problem. The technology and concept around serverless designs makes more sense for most problems. People just aren’t ready for it yet.
Corey is right, serverless is an unfulfilled promise. Jeremy is also right, hang tight…it’ll happen.
The critical question is when? I don’t think it will happen for a while yet. Businesses will have to start to feel the pain of running Kubernetes before serverless comes to the forefront.
Before you fire up the Twitter cannon, let’s take an objective look at Kubernetes. You cannot deny that it is a massive piece of critical infrastructure. It’s a burden for organizations adopting it. Despite the push away from running undifferentiated systems, k8s continues to rise.
Why? It lets teams maximize the value of their containers. Kubernetes allows containers to behave in modern ways. Solutions built on top of k8s are far closer to a serverless design than traditional server-based ones.
That’s a good thing.
Yes, there’s a significant complexity cost associated with k8s. But it’s one that teams are used to paying. After all, is running k8s all that different than running your own application servers? Or mail servers? Or any of the other underlying infrastructure that we’ve been running for decades?
They are all critical to making other things run but your customers don’t get exposed to them. Conceptually, they operate at the same level. Your teams and organization are ready for this effort. Doing this work is easier than changing the way everyone thinks.
Don’t get me wrong, this isn’t bad. This is normal. And it’s a good thing. Overall, these efforts move everyone closer to the ideas inherent in serverless designs.
This is a pattern that plays out time and time again. Gartner covers the technical side of it with their hype cycle approach. But at its core, this problem is a people challenge.
For the cutting edge, the devotees, the benefits of the new technology are clear. The shift to a new mental model is an easy one. But that’s a small percentage of the overall IT community.
The majority of the IT community is pushing towards a container driven landscape. And they will be there for a few years. Over time that will become too complex and burdensome. Then the mental model of serverless will become the dominant model.
It’ll take some time, but be patient. We’ve seen the pattern time and time again.
While the serverless community waits, innovation will continue. The ecosystem will mature. And when the rest of the IT community is ready, the technology will be too.
…and then we can start the loop all over again with whatever is next.
In late 2017, Station X did an AWS “This is My Architecture” video. The video talks about how they built out a genomics processing pipeline on AWS.
Now, a few years later, I react to that video and see what’s stood the test of time, what could be done simpler given today’s technology, and generally critique the design against the AWS Well-Architected Framework.
The AWS Well-Architected Framework is designed to help you and your team make informed trade offs while building in the AWS Cloud. It’s built on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.
These pillars cover the primary concerns of building and running any solution. And as much as we’d all love to have everything, that’s just not possible.
…enter the framework.
It’ll help you strike the right balance for your goals to make sure that your build is the best it can be now and moving forward.
I often get asked why I talk about building in the cloud and architectural choices so often…aren’t I a security person?
Yes, I do focus on security and architecture is a critical part of that.
There are really two types of security design work. The first is when you’re handed something and need to make sure the risks of that technology match the risk appetite of the users.
The second type is when you’re building the technology. This is where making choices informed by security early in the process can have profound effects. You’re no longer bolting security on but building it in by design.
That’s why I talk about architecture and building so much. It’s where we all can have the largest possible security impact!
This video—and the ones that will come after—looks at a specific set of design decisions and how they balance the concerns of the AWS Well-Architected Framework…where security is one of the five pillars.
Station X (now defunct) built out a very simple (in concept) data pipeline to process genomics data. The customer’s sequencing equipment uploads the data directly to S3. A fleet of EC2 instances then cleans and enriches that data and re-formats it to optimize for analysis.
That analysis happens via a managed service for Hadoop. Station X then built a custom analysis front end running in another fleet of EC2 instances.
Learn more in the reaction video 👆.
Btw, I’ve updated my course, “Mastering The AWS Well-Architected Framework” on A Cloud Guru. If you want a solid walk through of the ideas behind the framework and how to apply it to your work in the AWS Cloud, check it out!
AWS Labs has a lot of open source code up on GitHub. This post looks at the AWS Lambda Powertools for Python. This project aims to help you get the most out of your AWS Lambda functions.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 15-Nov-2021, 18:02
AWS Labs has a lot of open source code up on GitHub. This post looks at the AWS Deployment Framework, a project designed to help you deploy across accounts in an AWS Organizations environment.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 15-Nov-2021, 13:06
The Amazon Builder’s Library is a great set of deep dive papers into the challenges with modern systems. “Avoiding overload in distributed systems by putting the smaller service in control” looks at how Amazon balances system stability between control plane and data plane requests.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 12-Nov-2021, 13:34
The Amazon Builder’s Library is a great set of deep dive papers into the challenges with modern systems. This post looks at how Amazon conducts hands-off deployments.
“Automating safe, hands-off deployments” takes a look at how Amazon does continuous deployment. Amazon has been doing this practice for a long while and has automated each process to the point where it’s hands-off.
I call out a few more details in the Twitter thread below…
@marknca tweeted at 11-Nov-2021, 21:12
DevOps has taken the development world by storm. And why not? It’s a fantastic way of working and the cloud has taken that to an 11.
Now DevSecOps is a thing…isn’t it? Security is a huge concern for everyone. No one really debates that. The debate is how to improve security overall.
In this video, we look at the biggest mistake security teams make when trying to get involved in modern development with DevSecOps.
Feel free to join the discussion on LinkedIn or in the YouTube comments.
I see security teams making the same mistake over and over again when it comes to “shifting left.” It’s frustrating from afar and infuriating when you have to deal with it day-to-day.
Let’s dig in to the disaster that is DevSecOps…
[00:15]
Imagine for a minute, you’re in your kitchen preparing dinner. You’re a reasonably good home cook. More often than not, what you put on the table is enjoyed by those you’re sharing it with.
Sure, every once in a while you miss. But that’s the rare case, so when it does happen everyone smiles, you laugh, and then place an order for takeout. Mistakes happen.
Not too bad, right?
[00:29]
Now, let’s say while you’re getting ready to sit down for a wonderful home cooked meal, your neighbour invites themselves in. They immediately start hammering you with questions like, “How sharp is that knife?”, “Do you know who grew that broccoli?”, “Are there too many ovens in this neighbourhood?”
Taken aback, you politely ask, “Um, are you a professional chef? Do you have a lot of experience cooking?”
They reply, “Oh no, I don’t even have a kitchen in my place. I just order food every once in a while.”
That’s basically the scenario I see play out in organizations around the world.
The development teams and builders are working to solve business problems and address customer needs.
Then the security team shows up out of nowhere and starts asking seemingly irrelevant questions and demanding that priorities change in the name of “reducing risk” and “improving the overall security posture” without understanding what you’re working on or how you work.
[01:37]
This is why even the name DevSecOps frustrates me to no end. The DevOps philosophy already assumes that you want to build a resilient, reliable system. There’s no need to jam another acronym in there.
Teams know that security is important, they just need the information and support to make smart decisions at the right time.
So is this whole “shift left” thing doomed?
No.
Not if you do it well.
[02:06]
If you’re on the security team, the first thing you need to understand is that you probably don’t understand how the builders are working.
You can fix that.
Spend some time with them. Ask lots of questions to better understand their workflow and concerns.
Most important of all, make sure that the security tools you shift left provide information with the proper context and enough data for teams to make an informed decision.
[02:34]
Just because it’s a security priority, doesn’t mean it’s a business priority.
For developers and builders, understand that security controls can provide real value to you. The whole goal of these controls is to make sure things work as intended.
Network security tools look for malicious activity and malformed traffic. You don’t want that anywhere near your app.
Threat detection on your servers and containers is looking for errant processes and other indicators of compromise. This makes sure that your resources are only working for you instead of doing things like mining cryptocurrency for cybercriminals.
Posture management—ugh, horrible name—looks at the cloud services you’re using to make sure that you have configured them in a way that matches your risk appetite.
Vulnerability scanners look at your tech stack, trying to find known issues before they bite you in the you-know-what.
[03:26]
Everything on this list and most of the other security controls out there can dramatically HELP you meet your goals.
With that understanding, you need to make sure that you have access to the outputs of these tools. You need to know that they are in place and doing their job, so that you can focus on other parts of yours.
By now, you’ve figured out that the number one mistake I see security teams making when they “shift left” is IGNORING the developers and builders.
For some reason, security teams assume that to “shift left” means doing their isolated security work earlier in the development process. That’s an archaic way of thinking.
[04:05]
To truly shift left, you need to leverage the capability of security tools and processes to help developers and builders identify risks with their systems earlier in THEIR processes.
This data will help the teams make informed decisions about what actions should be taken to meet the business goals.
Shifting security left can help reduce the risks to the business while improving the quality of the systems you build.
Who wouldn’t want that?
The Amazon Builder’s Library is a great set of deep dive papers into the challenges with modern systems. This post highlights some of the challenges in dealing with multi-tenant systems.
“Fairness in multi-tenant systems” looks at the challenges of balancing loads within multi-tenant systems. Specifically, issues around handling API requests to these systems.
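One common answer to the fairness problem is giving each tenant its own request budget, so one noisy tenant can’t starve the rest. Here’s a minimal per-tenant token bucket sketch; it’s illustrative, not Amazon’s implementation:

```python
import time

class TenantThrottle:
    """Tiny per-tenant token bucket. Each tenant gets its own budget
    of requests, refilled at a steady rate up to a burst cap."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.burst = burst
        self.buckets = {}  # tenant_id -> (tokens, last_refill_time)

    def allow(self, tenant_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(tenant_id, (self.burst, now))
        # Refill based on elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[tenant_id] = (tokens - 1, now)
            return True
        self.buckets[tenant_id] = (tokens, now)
        return False

throttle = TenantThrottle(rate_per_sec=1, burst=2)
```

The paper goes much deeper (admission control, shuffle sharding, and more), but per-tenant budgets are the starting point.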
I call out a few more details in the Twitter thread below…
@marknca tweeted at 10-Nov-2021, 14:00
The Amazon Builder’s Library is a great set of deep dive papers into the challenges with modern systems. This post highlights some of the challenges that the retry pattern presents.
The paper, “Making retries safe with idempotent APIs”, follows up yesterday’s thread on the “Timeouts, retries, and backoff with jitter” paper.
This one takes a much deeper dive into the challenges that a simple retry poses to an API. It’s all about balancing the customer experience with the systems’ stability & performance.
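The core mechanic is easy to sketch: the client attaches a unique idempotency key to the request and reuses it on every retry, and the server stores and replays the first result rather than executing twice. A toy version, with all names hypothetical:

```python
import uuid

class PaymentAPI:
    """Sketch of an idempotent endpoint. A retried request with the
    same idempotency key replays the stored result instead of
    charging the customer a second time."""

    def __init__(self):
        self.results = {}   # idempotency_key -> stored response
        self.charges = []   # side effects actually executed

    def charge(self, idempotency_key, amount):
        if idempotency_key in self.results:
            # Retry detected: replay the original response, no new charge.
            return self.results[idempotency_key]
        charge_id = str(uuid.uuid4())
        self.charges.append((charge_id, amount))
        response = {"charge_id": charge_id, "amount": amount}
        self.results[idempotency_key] = response
        return response

api = PaymentAPI()
key = "client-generated-key-123"
first = api.charge(key, 100)
retry = api.charge(key, 100)  # e.g. the client timed out and retried
```

The paper covers the harder parts this sketch skips, like storing results durably and handling concurrent retries.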
I call out a few more details in the Twitter thread below…
@marknca tweeted at 09-Nov-2021, 13:39
In late 2017, Capital One did an AWS “This is My Architecture” video. The video talks about how they built Cloud Custodian and how that tool helps them enforce policies in the AWS Cloud.
Now, a few years later, I react to that video and see what’s stood the test of time, what could be done simpler given today’s technology, and generally critique the design against the AWS Well-Architected Framework.
The AWS Well-Architected Framework is designed to help you and your team make informed trade offs while building in the AWS Cloud. It’s built on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.
These pillars cover the primary concerns of building and running any solution. And as much as we’d all love to have everything, that’s just not possible.
…enter the framework.
It’ll help you strike the right balance for your goals to make sure that your build is the best it can be now and moving forward.
I often get asked why I talk about building in the cloud and architectural choices so often…aren’t I a security person?
Yes, I do focus on security and architecture is a critical part of that.
There are really two types of security design work. The first is when you’re handed something and need to make sure the risks of that technology match the risk appetite of the users.
The second type is when you’re building the technology. This is where making choices informed by security early in the process can have profound effects. You’re no longer bolting security on but building it in by design.
That’s why I talk about architecture and building so much. It’s where we all can have the largest possible security impact!
This video—and the ones that will come after—looks at a specific set of design decisions and how they balance the concerns of the AWS Well-Architected Framework…where security is one of the five pillars.
Capital One was an early leader in cloud. They quickly realized that it was hard to enforce various security and compliance policies in the AWS Cloud.
In order to solve that problem, they built Cloud Custodian.
This open source tool allows you to write policies in a simple domain specific language (DSL) and then enforce them either on a schedule or as a serverless design pattern.
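For a sense of what that DSL looks like, Cloud Custodian policies are plain YAML. This example is illustrative only; check the project’s documentation for the exact filter and action names:

```yaml
policies:
  - name: stop-untagged-instances
    resource: ec2
    # Run as a serverless design pattern: react to the CloudTrail
    # event instead of polling on a schedule.
    mode:
      type: cloudtrail
      events:
        - RunInstances
    # Find instances missing an owner tag...
    filters:
      - "tag:owner": absent
    # ...and stop them until someone claims them.
    actions:
      - stop
```

The policy reads almost like the sentence you’d use to describe it, which is a big part of why the tool caught on.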
Capital One open sourced the tool and since then, it’s taken on a life of its own. It’s wildly popular and with good reason. It’s simple to use and addresses a key pain point for teams who want to build well in the cloud.
Learn more in the reaction video 👆.
Btw, I’ve updated my course, “Mastering The AWS Well-Architected Framework” on A Cloud Guru. If you want a solid walk through of the ideas behind the framework and how to apply it to your work in the AWS Cloud, check it out!
The Amazon Builder’s Library is a great set of deep dive papers into the challenges with modern systems. This post highlights some of the challenges in dealing with failure at scale.
“Timeouts, retries, and backoff with jitter” looks at various types of failures and their potential impact on both your service and its consumers.
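The paper’s core recommendation, capped exponential backoff with jitter, fits in a few lines. Here’s a sketch of the “full jitter” variant; it’s my illustration, not the paper’s exact code:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry a flaky call with capped exponential backoff and full
    jitter: sleep a random amount between 0 and the capped delay so
    retrying clients don't stampede a recovering service in lockstep."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter

# A fake dependency that fails twice, then succeeds.
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("service overloaded")
    return "ok"

result = call_with_retries(flaky)
```

The jitter is the part teams most often skip, and it’s the part that matters most when thousands of clients retry at once.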
I call out a few more details in the Twitter thread below…
@marknca tweeted at 08-Nov-2021, 14:13
AWS has a huge library of fantastic resources. This post highlights the recently released whitepaper walking public sector organizations through machine learning best practices.
“Machine Learning Best Practices for Public Sector Organizations” walks you through the ups and downs of a machine learning practice.
While the title and positioning call out the US Public Sector, this paper is really broadly applicable. There are a few resources specific to the US Public Sector—like The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update—but really, only about 1% of the paper is specific to that audience.
I call out a few more details in the Twitter thread below…
AWS has a huge library of fantastic resources. This post highlights the recently released whitepaper talking about serverless multi-tier architectures.
“AWS Serverless Multi-Tier Architectures with Amazon API Gateway and AWS Lambda” walks through how the classic 3-tier—presentation, logic, data—design pattern works in a serverless world.
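In that pattern, API Gateway fronts the presentation tier and a Lambda function holds the logic tier. As a rough sketch (assuming API Gateway’s Lambda proxy integration; the `name` parameter and greeting are invented for illustration), a minimal logic-tier handler looks like this:

```python
import json


def handler(event, context):
    # With the proxy integration, API Gateway passes the whole HTTP request
    # in `event`. The logic tier runs here; a data tier (e.g., DynamoDB)
    # would be called from this function.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The appeal of the serverless version is that the presentation and logic tiers scale on demand with no servers for you to patch or manage.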
I call out a few more details in the Twitter thread below…
AWS has a huge library of fantastic resources. This post highlights the recently updated whitepaper aligning the NIST Cybersecurity Framework to AWS.
“NIST Cybersecurity Framework: Aligning to the NIST CSF in the AWS Cloud” does exactly what the subtitle promises: it maps the NIST CSF to the AWS Cloud.
I call out a few more details in the Twitter thread below…