<![CDATA[ markn.ca - Last 10 posts ]]> https://markn.ca https://markn.ca/content/images/size/w256h256/2025/08/favicon.png markn.ca https://markn.ca Sun, 05 Apr 2026 09:05:12 -0400 60 <![CDATA[ Security is a service business...one star ]]> For posts that explain one of my talks, you'll probably want to view them directly online: Security is a service business...one star

This talk was delivered at AtlSecCon in Halifax, NS, on 10-Apr-2025

Abstract

When was the last time you felt like you had enough time in the day to get your work done? Are you exhausted by the never ending firehose of security challenges you have to deal with each and every day?

In this session, we are not going to change that reality. Sorry, security work is continuous, but it doesn’t have to be overwhelming.

This session looks at the workflows around your security practice and how it interacts with the business. Security is a service business, but teams are rarely set up in a way to deliver that service successfully.

There’s a lot of history that contributes to the current state of security teams, but that history typically isn’t serving a purpose. More often than not, the way we’ve built out our work leads to delays, frustrated colleagues, and eventually teams that work around us instead of with us.

This isn’t a talk about simply getting “buy in” from other leaders; it’s about breaking down our security goals and learning from how other types of teams and businesses are set up.

You’ll learn about the hidden challenges that impede your work, structures and workflows that can accelerate security improvements, and how to build stronger relationships with the rest of your organization.

Are your customers happy?

I'm confident that most security professionals will answer this in one of three ways:

"I don't know."

"I don't think they are."

"No."

None of those are great answers to the question.

Do you have enough resources?

Nope.

Why are you like this?

...organizationally 😉

When was the last time you designed a process for your team?

No, I don't mean writing down a playbook (though you should be doing that). I mean working through the steps of a systematic effort in order to design a process that works for your team and your customers.

Have you ever done that?

The security team

Let's start with first principles. There's always a reason why things end up in their current state and there's a lot we can learn from that history.

Why do most security teams organize the same way? Is that the best approach? Or just something we ended up with over time due to external factors?

This all started with endpoints.

Acknowledging that there was risk with our desktops (yes, desktops), organizations started to have folks assigned to managing these systems.

Not like we do today, but the first steps were there. Managing the OS and its updates, anti-virus software, and other steps to help protect the business.

The real nucleus of what we know of as the security team came to be with network controls. Rolling out firewalls, then intrusion prevention, and other controls around the perimeter was enough work that dedicated teams were required.

No more—well, less—side of desk work. We now started to see teams responsible for the castle wall protecting the "inside" of the business.

As connectivity expanded, we get closer to today. Teams are dealing with endpoint, network, and cloud controls.

While each of these areas contribute to defence in depth, we also approach them based on the security team's level of responsibility or influence.

Endpoint controls are still very much in the "OK, if it doesn't impact anything" bucket. Security teams tread lightly here, so as not to lose trust with the rest of the business.

Network controls are easier to roll out because they are typically entirely within the security team's purview, or at most involve a small handful of infrastructure teams.

Deploying security controls in the cloud can be more direct. With all resources available via an API, connecting to systems, monitoring them, and gaining visibility are more straightforward than ever.

But there's more to security than just these three areas. We've expanded to risk practices, compliance activities, and proactive work like threat hunting.

Security teams in medium-sized enterprises are likely to scale to one or two—or more—dedicated resources for each of these areas. Larger organizations can even get to the point where they have dedicated teams for each of these areas.

But one thing that tends to hold true—even for the smallest of teams—is that we organize our teams based on function.

This is Francine, she is responsible for our risk practice. Jo takes care of compliance. Etc.

Functional structure

Functional structures tend to exhibit these properties:

  • They allocate resources based on their functions
  • Information flows up and down easily (or at least by default)
  • Decisions tend to stay within each of the functions
  • Individuals in each function will develop deep expertise in that area over time
  • Explicit workflows are required to break silos

And it's this last point that is the source of most of our challenges.

I don't think this structure is conducive to workflows that will meet your goals. Or the goals of your customers.

Worse, I don't think that we have the time/energy/awareness to step back and examine the link between our team structure and our workflows.

Simply put, we are too busy doing the work to understand how our approach to the work is making it harder for everyone.

A short activity

In this section, the audience is asked to—and politely does—participate in a group activity. They say each of the letters as they appear on screen.

A B C D

E F G

H I J K

Stop.

I'm not sure why y'all are doing it this way. Let's restart.

(In person, the audience almost always nails this part. They are saying each of the letters in English at the same time and nailing the beginnings of the song as well.)

A B C D

Stop.

I ask the audience, "Why are you saying it that way?"

They are confused. I then repeat the beginning of the alphabet in Dutch. The letter sounds are very different than the English ones.

The point of this callout is that I had very different expectations for the activity. Expectations that didn't line up with the audience's assumptions.

On the same page & language, we restart.

A B C D

E F G

I ask the audience, "How many vowels have we said so far?"

This breaks the flow of the recitation and song. It's an unexpected question, even though it's a simple one to answer.

We restart for the 3rd time.

A B C D

E F G

H I J K

LMNO P

Inevitably, a North American audience will say L, M, N, and O as "elemenopee".

It's a fun call out and it runs counter to the previous pacing, but it aligns with the song.

The point of this is that it's an unspoken change that everyone just gets. They go along with it because of the ingrained cultural elements, not because they talked about it beforehand.

Everyone in the audience (or enough that the point is made) recites the alphabet using, "The ABC Song" instead of just saying each letter in turn.

There wasn't a discussion or agreement to do this. Outside of the subtle hinting in the visuals, it's just what everyone defaults to.

It's a cultural expectation. It's "the way we've always done it".

It's a direct parallel to a lot of activities in organizations and frequently the security team (me as the speaker in this case) is unaware of that expectation!

For fun, the audience gets to repeat the whole song without interruption.

A fantastic number will also—always—add the bonus line, "Now I know my A B Cs, next time won't you sing with me?"

For a bonus, unreleased tangent, it's pointed out that most folks also can't repeat a segment of the alphabet without starting at A and ending up in a close approximation of the song as well. Human brains are weird!

What happened?

Everyone knew the song. You default to it, because you learned it and practiced it a lot as a child.

It's a shared experience that reinforces the original experience and understanding.

I restarted the group 4 times. Each time to clarify something for me or to force the group to conform to my expectations and requirements.

That's a generally frustrating experience. While trying to fulfill my needs, I cost the group time and enjoyment.

...pausing to let that sink in...

Teams generally work well (enough) together.

Don't be the one who disrupts that.

Don't be the one who disrupts that to serve your own needs...even if those needs will help serve the group!

Self-checkouts

Let's pivot to an even more frustrating topic. But it's a topic that we can actually learn a lot from and relate to as a group.

In the beginning...

When they first rolled out, self-checkouts were hailed as a technological advancement, a time saver, and an overall benefit to both the business and the customer.

There were some discussions about the balance of those benefits, but outside of the "old man yells at cloud" segment, there wasn't a lot of negativity...at first.

I bring up self-checkouts because I'd like to share a story to help illustrate my overall point of the importance of explicit service design. To help understand how we can all be more effective security practitioners, I'd like to talk to you about my local pharmacy...

Before rolling out self-checkouts about 18 months ago, my pharmacy had six checkout lines.

Each one of the checkouts was staffed. In peak times, they had six employees running the six checkout lines.

If we put ourselves in the owner's shoes, the six checkouts—running at a theoretical maximum—would require about $0.59/sale in overhead.

We get to that number by looking at the number of sales each line can process during an hour and the cost to serve that line.

When the pharmacy deployed their self-checkouts, they made a couple of slight adjustments to the traffic flow.

The two middle lanes were now product shelves for those impulse buys. The back wall now housed 3 self-checkouts, as did the left-most checkout line.

The right-most checkout line was kept as a staffed line to help address any customer issues. This employee was also responsible for helping any self-service customers who encountered issues.

Now, when we adjust for the extra time it takes for self-service, the overhead drops significantly for the store.

They are pushing through fewer sales (120 vs 180), but at 25% of the overhead.

Given the average sale at a pharmacy these days, the lost throughput might not seem worth it on its own. However, the back-end costs for employees are going to be significantly higher than maintaining the self-checkout systems.

The self-checkouts also don't have scheduling issues. They are always available and you don't need to try and predict demand. There's a consistency there that simplifies operations.

The problem—ok, a problem—the store encountered quickly was that four of the six self-checkouts weren't seeing much use.

The reason was simple: customers weren't seeing them!

The product displays, which were thought to be a clever way to re-purpose the previously staffed checkouts, were interfering with the view of the self-checkouts.

Customers were queueing up like they used to for the staffed checkouts and not taking advantage of the additional self-checkout capacity.

When we look at the throughput under this constraint, the overhead is half of the full service approach, not a quarter of it.

That's a huge impact to the expected savings. This is a problem that needs to be solved.
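The arithmetic behind these overhead figures can be sketched with a quick back-of-envelope model. The hourly wage and per-lane sales rate below are illustrative assumptions I've picked so the numbers line up with the post (the post only gives the $0.59/sale figure, the 120 vs 180 sales comparison, and the 25%/50% overhead ratios); the exact dollars aren't the point, the ratios are.

```python
# Back-of-envelope model of checkout overhead per sale.
# Assumed numbers (illustrative only): $17.70/hr per employee,
# 30 sales/hr through each active lane.
wage_per_hour = 17.70

def overhead_per_sale(staff, sales_per_hour):
    """Staffing cost for one hour divided by sales processed in that hour."""
    return staff * wage_per_hour / sales_per_hour

# Six staffed lanes at theoretical maximum: 6 x 30 = 180 sales/hr.
full_service = overhead_per_sale(staff=6, sales_per_hour=180)   # ~$0.59/sale

# Self-checkout: one employee overseeing the area; slower per-customer
# flow caps throughput at ~120 sales/hr.
self_checkout = overhead_per_sale(staff=1, sales_per_hour=120)  # ~25% of full service

# Obstructed layout: customers only find a couple of the machines,
# so throughput drops by half again.
obstructed = overhead_per_sale(staff=1, sales_per_hour=60)      # ~50% of full service

print(f"full service:  ${full_service:.2f}/sale")
print(f"self-checkout: ${self_checkout:.2f}/sale ({self_checkout / full_service:.0%})")
print(f"obstructed:    ${obstructed:.2f}/sale ({obstructed / full_service:.0%})")
```

Note that the wage cancels out of the comparisons: the 25% and 50% figures fall out of staff count and throughput alone, which is why the obstructed layout erased half of the expected savings regardless of what the store pays.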

The solution the pharmacy came up with was to remove the obstructions. This makes perfect sense and really opened up the area.

While it removed the ability to convert the impulse buyers, it made it a lot easier to see the entire set of checkout options.

But there was a problem...

A significant percentage of the customers for the pharmacy are seniors. Seniors who wanted nothing to do with the self-checkouts.

When presented with the suite of options, the seniors overwhelmingly selected the full service option. To the point where they were queueing up when almost all—if not all—of the self-checkouts were open.

This reduced the checkout throughput of the store dramatically.

Any guesses on how the store "solved" this challenge?

To address this issue, the store put up a new half wall. They physically blocked the direct access to the full service checkout.

The positive (?) aspect to this solution is that it helped to shape the queue. Instead of blocking traffic to the main shopping aisles, the queue now formed in the checkout area.

However, this block reduced the visibility of the full service checkout. The customers who wanted to use it had to now go out of their way to queue up for it...if they saw their preferred option at all.

This also doubled the walking in their workflow. They now had to walk to the queue, move to the full service checkout, and then walk past all of the self-checkouts (again) to leave the store.

This is not a good solution and customers complained. To help address this, the store added an additional staff member to help guide more people to the self-checkouts.

In isolation, each of these decisions makes sense. Given problem X, solution Y is a reasonable approach. But, when you examine the overall workflow, the entire problem space, you see how ridiculous these steps are.

From the business perspective, the numbers are better. Overhead is down.

But what about customer satisfaction? This is much harder to measure. Anecdotally, as a customer, I can tell you it's down. How much will that impact their bottom line? I'm not sure.

For our purposes, the key takeaway is that even though the steps taken to address each issue were logical and moved towards the stated goal, the result isn't what was intended.

And now...

It's not just my experience or this pharmacy: self-checkout has not been an amazing solution.

Through multiple iterations of the various platforms, a positive and smooth self-checkout experience is still very rare. This is now one more thing that we just put up with...despite the general feeling.

Again, this is a result of a series of logical decisions. The problem is that the context window for those decisions got smaller at each and every step.

The end result is a lot of effort and an outcome that may—or may not—align with the actual business goals.

Service design principles

While there are formal methods of doing service design, at its core, simply asking questions and listening to the feedback will improve your team's workflow significantly.

However, the principles proposed in "This is Service Design Doing" are a great way to establish a shared understanding of what you're setting out to do.

In the simplest terms, those principles are:

  • Take the customer's perspective (human-centered)
  • Involve a diverse set of stakeholders in development (collaborative)
  • Small experiments, fast feedback loops (iterative)
  • Visualize and orchestrate the whole process (sequential)
  • Get out into the actual environment of the service (real)
  • Address customer needs sustainably throughout (holistic)

"This is Service Design Doing" is an excellent starting point. It's not the only reference out there, but it's very approachable and the Methods book is a great playbook to help you implement changes in your team.

Risk assessments

Assessment frameworks

There are a lot of different frameworks for doing risk and threat assessments. There are advantages and disadvantages to each, though really any will do.

The fact that you're conducting assessments—and regularly updating them!?!—is the most important thing.

How many folks use one of these frameworks? Or something similar?

Assessing risk

Do you conduct the assessment when the team is writing the code and building the solution?

...when they are testing the solution out?

...or maybe when it comes time to run the solution?

Trying to start and then finish an assessment just as things are going to production is far too common. We—the security team—end up in this position often because of some of the service challenges we're talking about here.

Of course the answer is that you should be doing risk assessments as a continuous process. There is assessment activity at all stages of solution development.

But, this only works if you're collaborating with the builder team. If you have the trust of other groups in the business. You have to work together and towards a common—and commonly understood—goal for this to actually work.

Getting there...

How do you end up in this utopia? This fictitious, "it's easy to put on PowerPoint" world?

The honest, open answer is, "Slowly, patiently, with a series of small steps that each get you closer to your shared goals."

Let's start by looking at the service design principles and the questions we can ask ourselves in order to start to find the path forward.

If we take the customer's perspective, we should have answers to the following questions:

  1. Why are we doing this?
  2. What do I get from it?
  3. How can I make this easier?

When it comes to risk assessments (and other security work), often the answers are:

  1. Not sure.
  2. No idea.
  3. Just not do it?

Those are not great answers and they are strong indicators that we—the security team—need to be doing a much better job of communicating.

When addressing a good representation of your stakeholders, ask the following of your own team (security):

  1. Will the same process work for everyone?
  2. What are the key outcomes?
  3. Are we removing waste from this process?

Making small changes and getting feedback as quickly as possible is one of the most important things you can do for your work.

  1. When was the last time we asked if this worked?
  2. Do we gather data on our process?
  3. What adjustments have we made?

These are all questions that will help you build your feedback loops and create a truly iterative process.

In the examples we worked through today, we saw the value of taking the big picture view. Understanding the entire process is the only way to avoid the shrinking context we saw in the self-checkout example.

  1. Does our work start and stop at our team "borders"?
  2. How much do we know about our customers?
  3. What happens after the assessment is done?

Visualizing and orchestrating the whole process is key to breaking out of your silo. It's how you counter the limitations of the functional team structure.

Too many teams lay out their workflows based on their understanding and expectations of the customer. While it's possible that this might be accurate, it's unlikely.

Getting out and experiencing your customer's reality will help you understand their perspective. That understanding will lead you to better solutions.

My pharmacy didn't understand the majority of its customers. They missed the fundamental frustration that self-checkouts bring up with their older customers. No one wants to feel like they don't understand or that they are the problem and "don't get" the technology.

  1. Have you sat with your customers? With theirs?
  2. How often do you connect with the business?
  3. Do you know how other teams work?
  4. Have you tried their work, their tools?

  1. Are you working to make things simpler?
  2. Can you help the customer do more on their own?
  3. Is there something working really well? Can you do more of that?

Sustainability in processes is tied to complexity. Do not attempt to design a process that covers 100% of the edge cases. A workflow that solves 80–85% of the most common cases and has an allowance for the remainder will be far more effective.

When making a decision, the simpler path is where you should be aiming.

This is bad

If your customers are unhappy, you have work to do. Frustrated teams work around security workflows. Not because they don't want to be secure, but because they want to get their work done.

Security is in their way. You have to avoid that at all costs.

So, do we think that the structure of our teams is influencing our workflows? And that these workflows are not serving our needs or our customers?

I do. And I think we need to change. I'm confident we can change and that those changes don't need to be all-encompassing to start.

We start by choosing to address these gaps.

We build a network of support within the business. Build understanding of how other teams work, how they communicate, and how our shared goals align.

You cannot succeed as a security team without the support of other teams in the business. The numbers simply don't add up. You need to succeed together.

The good news is that you have the same goals, you just may be speaking different languages right now or failing to share each other's perspective.

You can address these challenges and improve your security by working together. And that starts with you taking a small step towards that goal.

References

]]>
https://markn.ca/2025/security-is-a-service-business-one-star/ 67f6bbeaaf961b00014432ec Thu, 10 Apr 2025 14:40:31 -0400
<![CDATA[ Let's Talk Cloud ]]> Want to share with others? Formatting issues in your reader? This post is available online: Let's Talk Cloud

As the Vice President, Cloud Research at Trend Micro, I had a mandate to educate others about cloud security and enough leeway to experiment with how I went about it.

We had a fantastic communications team who were eager to try out new platforms and new approaches. With streaming and podcasting really starting to take off, we launched a new show, "Let's Talk Cloud".

Right out of the gate, we knew this was going to be a learning experience for us. We kept the show simple to start with. The first show was a discussion between myself and two of our technical leaders in the field, Jeff Westphal and Fernando Cardoso.

Jeff called in from an event where he was presenting and Fernando was in one of the Trend offices. It was a very scrappy setup, but it worked. The conversation flowed well and we were able to draw in a modest live audience.

For the remaining 5 episodes in the first season, we stayed within the Trend Micro family when recruiting guests. This made it a bit easier to justify the rough edges that we were still smoothing down.

By the end of the first season, we had a reasonably smooth-running show that was gaining a lot of traction. The view numbers were nice, but what was more important was how often someone—a customer, a colleague, or a random stranger—would tell me how they had watched an episode and it got them thinking.

For the next season, we were a lot more ambitious in going after guests. We had high profile guests like Forrest Brazeal, Patrick Debois, and Tanya Janca.

Sadly, I moved on from Trend Micro before I was able to film another season. However, our work on this show kicked off an ongoing series for the company. Next up was Let's Talk Security hosted by Rik Ferguson and then #TrendTalksBizSec and #TrendTalksThreats.

Sample episodes

All episodes

]]>
https://markn.ca/2025/lets-talk-cloud/ 67f6bbeaaf961b00014432db Mon, 17 Mar 2025 16:20:34 -0400
<![CDATA[ Mornings with Mark ]]> This post is available online: Mornings with Mark

Going back through the archives of "Mornings with Mark" has been quite the experience. I've been both fascinated and a little horrified (the hair, the look, the production…yikes) re-watching some of those nearly 200 episodes.

It's interesting to remember that back then (2018—2019), a regular, dedicated vlog focused on cybersecurity and privacy on social media was pretty rare.

"Mornings with Mark" was really a space for me to explore my thoughts on these crucial topics and share some of what I was learning while traveling and teaching cybersecurity. It was also a bit of an experiment with social media and video platforms.

I ended up regularly multi-streaming to LinkedIn (where I was part of the streaming beta program), Twitter, and YouTube. Social media was very different in 2018 and the consistency of the vlog helped grow the audience over time.

Over its run, the show averaged 250 live viewers and then another 1,000 on-demand within that week. Those numbers may seem modest, but to have that type of reach for such a simple and raw vlog was really touching.

It’s nice to know that I was able to help that many people understand security and privacy just a little bit better.

Sample episodes

All episodes

Nov/2018—9 episodes

Jan/2019—8 episodes

]]>
https://markn.ca/2025/mornings-with-mark/ 67f6bbeaaf961b00014432da Fri, 14 Mar 2025 20:41:07 -0400
<![CDATA[ My years on CBC ]]> This post is available online: My years on CBC

Walking past the display of Leafs memorabilia, I turned the corner, opened the door, and took a seat in the conference room tucked away in the Air Canada lounge. I chuckled at the framed magazine covers adorning the walls with a who’s-who of Canada. I set my scorchingly hot Tim Hortons tea on the desk and waited to join the province-wide broadcast on CBC Radio.

In that pause, I took a moment of self-reflection and giggled quietly, half expecting a Mountie on a moose or a Québécois lumberjack with a mountain of syrup-drenched pancakes to pass by.

It felt deeply Canadian. Yes, an unbelievable, absurd, comical amount of Canadian-ness compressed into one moment, but that didn’t diminish my enjoyment. The smile that spread across my face stayed with me the rest of the day.

First steps with the network

My first appearance on the network was specifically Canadian as well. In 2014, the CBC was looking for expert commentary on how the Canadian Revenue Agency (CRA) was responding to a serious, widespread software vulnerability.

Having already started to appear in the media semi-regularly the previous year, I was a good fit for the article with my decade of experience in the Canadian public service. My commentary appeared alongside the director of the Canadian Internet Policy and Public Interest Clinic (CIPPIC) and Dr. Christopher Parsons from—at that time—the Citizen Lab.

I was humbled that my commentary was featured with such prominent experts in the field. Experts that I regularly read and still do!

That piece really sparked a passion in me. I enjoyed doing the analysis and offering a pragmatic voice on technology issues. A voice that I hoped—and still hope—helps to balance out other voices in the field.

Even then, I knew that my opinions often run counter to the louder voices that can grab the headlines. I’m ok with that. I’d rather go on the record saying something I believe in, something that I can stand behind even a decade later.

I’m also ok being that pragmatic voice. It’s not as flashy, but I believe that it can deliver more nuance and help make complex issues accessible to everyone.

Off and running

Over the next 8 years, I would appear more frequently on various CBC properties. From St. John's to Victoria, I always tried to make time to support CBC journalists and hosts who were looking to help Canadians understand what was going on in the world of technology.

I was thrilled when things started to snowball as my comments were published more frequently. This led to a regular spot on TV, appearing on The Exchange with Peter Armstrong. I also covered issues for the CBC News at 6 in cities across the country and was featured in segments on the CBC News Network channel.

Easier—logistically at least—were the radio segments. I've always been an early bird, so when I delivered a couple morning drive-time segments, I started to get called more frequently. I get it, there's not a lot of folks willing to try and distill complicated issues into something easily understandable before 8 am.

CBC Ottawa Morning

Those early morning segments led to a regular radio column on CBC Ottawa Morning. Once every couple of weeks, I would chat with the host for 6-8 minutes to summarize the news of the moment and try to contextualize it for the audience of 100,000+.

I absolutely loved the challenge of it and got a lot of joy out of helping folks in the region to better understand specific issues.

The process was pretty straightforward. Sometimes the show would reach out the day before and ask if I could talk about a news story. Other times, I would reach out and suggest a topic flying a bit under the radar.

We'd agree on a topic and I would do an initial brief to help the show's researchers start to dig in to prepare the host for the discussion. After that, I would conduct my own research and start to outline the key areas of the issue, its larger context, and try to highlight a few hooks that would help it all land.

I'd circle back to the show with a couple of bullet points to help point the conversation in a productive direction and that was really it for formal preparation. I'd make sure to study my notes and go over key points so that the conversation could flow smoothly while still being informative.

It was great practice for a workflow that continues to help me daily. Being able to identify a topic of interest and then quickly map the landscape around it has been a game changer for me.

This workflow not only satisfies my natural curiosity, but it helps me to consistently contribute to my team and my community.

Eight years of teaching and learning

From 2014 to 2022, I made over 100 appearances on air and in print for the CBC. Each and every time, I tried to help Canadians better understand how technology impacted their lives and communities.

Looking back, I can see how I’ve grown as a communicator. Starting out with safer commentary like a Timbits player taking the field for the first time. With practice, I’ve become more confident expressing my opinions and I’ve found my voice. I moved from just stating facts to crafting explanations that break down complicated issues into simpler, relatable analogies to help everyone understand.

I’ve learned the value of consistently coming back to a topic over and over again. Just because I may be a little tired of talking about security and privacy fundamentals doesn’t mean everyone is. It’s the patient repetition, the calm explanation of the key issues that truly reaches people.

Technology is complicated. There’s no getting around that. People are hungry to understand the questions technology raises and the questions it helps to answer.

Like that Air Canada lounge steeped in Canadiana, sometimes you need to go above and beyond to get the point across. For me as a security communicator, that means finding the hook inside the story that builds a bridge for the wider audience.

I loved my time on the CBC. It helped me grow as a communicator and touched on a nostalgia I didn't fully appreciate.

Research notes

Here is a sampling of reference notes and materials that I prepared for various segments over the years. These focus on the last few years when I was active with the CBC.

I've archived them here on the site for my own memory, but also to show some of the behind the scenes process that goes into doing a regular technology column on a show.

]]>
https://markn.ca/2025/cbc/ 67f6bbeaaf961b000144313e Thu, 13 Mar 2025 12:09:00 -0400
<![CDATA[ Helping people get the most out of AWS re:Invent ]]> This post is available online: Helping people get the most out of AWS re:Invent

In the fall of 2012, 5,000 people gathered in Las Vegas for the first AWS re:Invent. I was there and spent almost all of my time with my laptop open, surrounded by other builders, working together to try out new techniques and tricks that we were sharing with each other.

That spirit of community was infectious. After the event, a lot of the connections we made shifted online. And year after year, I would see those friendly faces at various events around the world, and we all did our best to make it back to Las Vegas in November for the biggest conference in cloud.

Monstrous growth

re:Invent grew almost too big. Every year it would expand to more venues and add more sessions. The event was scaled back in the pandemic, with the 2020 edition moving entirely online.

As the world has moved through the pandemic, the conference has grown back to its previous size and beyond. Almost 60,000 people attended the 2024 event.

It's at the point now where I don't think the hours in the day will permit anything more to be jammed into the week...though I'm sure I'll be surprised.

What should I do?

I've participated in every edition of the conference: as a builder from the start, an AWS Community Hero for ~6 years, and now as an Amazonian. During the period of rapid growth, I started writing an annual guide to the conference.

It started simply enough. I was trying to remind myself how to prepare for a physically and mentally exhausting week. I love attending this show, seeing my friends, making new ones, and learning a ton. But, it can take a lot out of you.

I started to experiment with how I approached the conference. I figured out little tricks that made my week easier. I genuinely wanted others to get the most out of the week too.

I published eight editions of the guide, one for each event from 2015 through 2022:

  1. 5 Ways To Get The Most Out Of AWS re:Invent 2015
  2. 5 Ways To Get The Most Out Of AWS re:Invent 2016
  3. The Ultimate Guide to Your First AWS re:Invent (2017)
  4. The Ultimate Guide to AWS re:Invent 2018
  5. The Ultimate Guide to AWS re:Invent 2019
  6. The Ultimate Guide to AWS re:Invent 2020
  7. The Ultimate Guide to AWS re:Invent 2021
  8. The Ultimate Guide to AWS re:Invent 2022

Define 'ultimate'

You'll notice that the 3rd edition of the guide introduced the adjective "ultimate". I debated whether or not to do this at the time.

It's a bold claim and I'm deeply uncomfortable drawing attention to myself.

However, that guide is also a 19 minute read. It's comprehensive, to say the least. I think the "ultimate" description is accurate. The guides quickly became a months-long effort.

Not because they took that long to write, but because information about the show kept changing in the lead-up. AWS would announce the basics (where, when, etc.) and then add more details as they locked things in.

In addition to the level of detail, the guides started to get a lot of attention. Each year the audience grew. People would reach out to me with great feedback, sharing how they had come across the guide and how it helped them.

All told, over the eight guides, more than 500,000 people read them. That's a crazy number of people, in line with the majority of attendees.

Copycats?

While some companies did try to copy the guides, more simply wrote up their schedules and linked to my work. I really appreciated that and tried to keep things as neutral as possible.

The personal recommendation approach resonated with people. I'd like to think that it helped to seed the idea for the official AWS guides to the event. These guides were written by individuals in the community and helped a specific audience select sessions at re:Invent. I wrote the security guide for the first few years and I'm happy to see the effort continuing to this day.

Constant #protips

Looking back at the guides, there are a few tips that still hold up and probably always will:

  1. Wear a good pair of sneakers that you've already broken in
  2. Pack snacks
  3. Hydrate often
  4. Chap stick and hand cream—casinos are absurdly dry
  5. Plan ahead to eat at reasonable times
  6. Don't be shy–take advantage of being there in-person
  7. Have fun!

A fun show

The guides were a way for me to share my excitement for the show. I always feel an odd combination of exhaustion and exhilaration when I attend AWS re:Invent.

There is so much to learn. So many people to connect with. It's a great reminder of the unlimited possibilities that drew me into technology in the first place.

While I don't write the guides anymore, I'm happy I did. I'm even happier that I still get to attend re:Invent—and re:Inforce!—even if it's a little more stressful helping to deliver the show vs. trying to take it all in.

Most of all, I'm glad that I was able to contribute to the amazing cloud community in a meaningful way. I'm happy I still get to contribute, and more than a little relieved that those contributions don't need 3+ months of work each year!

]]>
https://markn.ca/2025/helping-people-get-the-most-out-of-aws-re-invent/ 67f6bbeaaf961b00014432eb Mon, 10 Feb 2025 17:08:00 -0500
<![CDATA[ Some small steps towards better security communication and education ]]> For posts that explain one of my talks, you'll probably want to view them directly online at, Some small steps towards better security communication and education

Security is everyone’s responsibility. How is that supposed to work? Our teams have struggled for a long time trying to move away from reactive work to planning and building for a more resilient future.

Is that shift even possible given our small teams and the never ending stream of issues to respond to? How can you scale your security practice in any meaningful way?

Security issues are often deeply technical and nuanced. Delegating work is a constant challenge and it feels like we’re explaining the same things over and over again. Security teams are stuck.

In this talk, we’ll dive deeper into the roles security teams play within most organizations. We’ll explore the common approaches to running a security practice, what works and what doesn’t.

Then, we’ll start to examine communication techniques that can have a positive impact. We’ll look at how you can shift your work from constant response to more impactful efforts by laying the groundwork for others to succeed.

You’ll walk away with a better understanding of the problem your team is facing and some small steps you can take now to enable other people within your organization to make better security decisions.

You are a dedicated security professional. You understand your area of expertise deeply and are working as best you can to help improve the security of your organization.

You're working on a team of like-minded individuals. While it can be challenging always facing threats and trying to help reduce risk, you generally work well together.

The challenge is that your team is accountable for the security of the organization.

But you work with a lot of teams in the rest of the business. Those teams are responsible for various business goals. They are working just as hard to meet those goals.

It can be hard to keep up.

Why is it hard to keep up?

A few years ago, a couple of different analyst firms looked at the ratios of security professionals to the rest of the business.

They found that there was about one full-time security resource for anywhere from 650 to 1,488 other employees.

That's one person responsible for the tools, processes, and output of at least 650 others. Is that even possible?

In 2022, a report from the UK Department for Digital, Culture, Media & Sport provided a similar metric.

They found that 12% of businesses with 250+ employees had 1 person responsible for cybersecurity...and that wasn't necessarily a full-time assignment.

In 2023, Venture Beat conducted a survey and found that most organizations spend just shy of 10% of their IT budget on cybersecurity.

38% of that spend was on staff. That works out to 3.8% of the overall IT budget spend on security personnel.

That sounds like a lot, but these are some of the most highly compensated individuals on staff. Good for those in the industry, but it's still a disproportionate ratio of security folks to the rest of the business.

The Venture Beat survey provides even more insights. Most of the security spending is going to infrastructure and threat intelligence and detection.

That loosely translates into outer perimeter controls and figuring out what's already causing issues within your systems. Very little directly into scaling up the security team or preventing security issues in the first place.

The result of all of this is a lot of security folks feeling burnt out. Security teams are overworked, constantly fighting fires and trying to answer why a significant chunk of the IT budget is being spent on simply not losing ground.

We should do better. Can we?

Organizational design

...or lack thereof

The current workflow for most security teams is simple.

A business team has built or bought something and they want to get it into production as quickly as possible. They do have business goals to meet after all.

You, the security person, are the gate they must pass before that happens.

This works-ish. Sadly, it leads to a lot of "hero" behaviour which prevents the actual challenge from being addressed and piles more pressure on the security team members.

The fundamental challenge comes back to that ratio. There are a very limited number of security team members and way, way more business teams.

Security is almost always the slowdown or roadblock for their productivity...even though security is working at 100% or more of expected capacity.

Keeping things at a high level, there are only 2 ways to smooth out this workflow.

You can slow down the incoming work.

or

You can reduce your security goals.

No security team should accept a reduced security posture as a matter of standard practice.

We need to continue to raise the strength and effectiveness of the security posture of our organizations.

We might be able to slow the incoming work down though...we'll come back to that in a few.

Now, you can add more folks to the security team. You can scale up the team to handle more work.

This can help.

But, hiring anyone is an ongoing expense (something about always wanting to be paid 😉) and it takes time for new team members to come up to speed.

And as we've already looked at, the ratio of security team members to the rest of the business is so disproportionate that it's unlikely you'd be able to get it down to anything reasonable to actually address these challenges.

This is not a path that will successfully solve this issue.

So, what approach will work?

We—the security team—need to work with our business teams to reduce the issues upstream.

We need fewer security issues coming to us before systems are rolled out to production.

How do we do that?

Our general approach will be to use mechanisms to reduce our manual efforts.

A mechanism (in this context) means that we're going to try and create a tool of some sort—a process, an automation, etc.—and get folks using it, all while making sure it's delivering what we actually want.

What we don't want is more process and red tape. If something isn't serving the business' end goals, get rid of it!

Mechanisms and automation

...sort of

There's a lot we could look at here, but for this talk, we're going to look at the communications side of things.

Can we change the way we communicate and reduce the amount of work our teams are receiving? Can we make it easier to communicate in a more productive way?

Yes, we usually lean into technology to solve problems. We eagerly roll out code and additional layers of systems to address issues as we come across them.

That's not necessarily a bad thing. But, more frequently than we'd like to admit, we just end up with more overhead and challenges that are harder to address because the systems we just deployed have added more constraints!

We're going to take a deeper look at a breach notification from here in Canada. Don't worry, this will be a positive example that we'll be examining to see if we can make some tweaks to improve it even further.

But let's start with a general template for a notification...

The formula for a breach notification—i.e., letting people know there was a security incident and they were affected—is very straightforward...at a high or conceptual level.

It is:

  • What happened?
  • What information was affected?
  • What have we done in response to the breach?
  • What does this mean for you?
  • More information and how to make a complaint (with a regulator, etc.)
  • Signed by a representative of the company

Remember, we're not trying to blame anyone. We're trying to learn!

We're going to dive into a breach TransLink had in 2020. TransLink is responsible for the regional transit network in metro Vancouver.

They were breached in 2020 and the entire recovery and review process took 7 months. That includes the clean up and work with the privacy regulator. The initial incident response appeared to be quite quick.

Overall, I think their communications were good. When compared to a lot of security comms, they probably should be seen as excellent.

But, I'm a bit picky and I think TransLink could've made a couple of small tweaks to really knock it out of the park.

From the TransLink primary web page for this incident:

"
In December 2020, TransLink was the victim of a cyberattack. Upon detection, we took immediate action to shut down multiple computer systems as a protective measure and launched an investigation.

Over the course of the investigation, we worked tirelessly with cybersecurity experts to understand what happened and determine what information was unlawfully accessed. We also worked with law enforcement authorities and notified the Office of the Information and Privacy Commissioner for BC.

This investigation has been a complex and time-consuming process that took months to complete. It involved extensive analysis, the use of e-discovery tools, and manual data reviews.

The privacy review concluded in June 2021.
"

As you can see, that is a solid opening. However, it does fall into some very common traps. Let's make a couple of edits...

In December 2020, TransLink was the victim of a cyberattack. Upon detection, we took immediate action to shut down multiple computer systems as protective measures and launched an investigation.

We worked tirelessly with cybersecurity experts to understand what happened and determine what information was unlawfully accessed. We also worked with law enforcement authorities and notified the Office of the Information and Privacy Commissioner for BC.

Here is what you need to know about your information.

This investigation has been a complex and time-consuming process that took months to complete. It involved extensive analysis, the use of e-discovery tools, and manual data reviews.

The privacy review concluded in June 2021.

Why those changes?

The original was too complicated, not empathetic, and it didn't set a shared context.

The changes we made shift the opening to quickly state what happened, hint at the scale of the response effort, and then dive into the number one thing the reader of the letter wants to know.

Of all the common traps the original fell into, the most egregious—yes, even in the context of a good communication, there can be things that are egregious!—is that it's written from the perspective of what the organization wants you to know about the situation, not what the reader wants or needs to know!

Yes, breach notifications and other security communications can be used to reduce damage to an organization's reputation. However, it's critical that you remember that both parties in this communication are victims.

The organization—TransLink in this case—was the victim of cybercrime. The intended readers of this letter were also victims of that same crime.

As long as the organization wasn't derelict in their care of the information, this post shouldn't be written with the tone of "it's not my fault!", but one that lands more along the lines of, "we are both impacted here, but let's start to fix this by focusing on you".

Let's go for a complete re-write. We'll start with a strong and direct opener written with the reader and their position in all of this top of mind.

"
In December 2020, TransLink was hacked. When we found this out, we worked as quickly as possible to protect your data.
"

Simple. Straight to the point. With the first sentence, the reader knows what this communication is about and what happened.

The second puts TransLink in a positive light and it's also—without all of the fancy terminology or long-winded explanation—an accurate description of what happened.

We continue...

"
We brought in cybersecurity experts to help. We also contacted law enforcement and the Office of the Information and Privacy Commissioner for BC.
"

This next section is primarily a regulatory requirement. They need to let the reader know that they've complied with the local privacy legislation.

But, we frame it here as a follow-up to the statement about working as quickly as possible to protect your data.

This way, it shows—in plain language—the effort that the organization went to in response to the breach.

The next line is critical and it's often missing from these types of notifications.

"
We’ve contacted the people whose data was accessed during the hack to help them.
"

Remember, the original text that we're rewriting was published on the TransLink website. It went out to everyone. That makes sense due to the scale of the breach and the nature of the organization. This agency is the regional transit authority and its work impacts everyone in the area.

We add this line as a direct answer to the question in every reader's mind: "Was my data breached?". This direct statement answers it near the top, helping the reader focus on the rest of the message.

We follow that up with an explanation of what the reader can find on this page.

"
This webpage contains information about what happened. It lists what data was accessed and what steps we’re taking to try and make sure this doesn’t happen again.
"

And finally, we close this section with a catch-all to help answer any questions the reader may have after reading the rest of the page. This may be implied, but by stating it, the reader is reminded of the dynamic and that the organization is trying to help reduce the overall risk and any potential harms that may come from the breach.

"
If you have any questions after reading this information, we’ve set up a few different ways to get in touch with us directly. Those methods are listed at the bottom of this page.
"

Again, the communication from TransLink during this incident was great. But, with a few small tweaks, I think we've improved it to focus on what matters most to their target audience.

Our updated version heads off a lot of questions by answering them directly. We also reduced the complexity of the writing, making the text easier to read. We've dropped the reading level from roughly second-year university to middle school (per the Gunning fog index). That makes the entire text much more accessible.
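As an aside, the Gunning fog index mentioned above is simple enough to estimate in a few lines. Here's a minimal sketch; the syllable heuristic is crude, so dedicated readability tools will disagree on the exact score:

```python
import re

def gunning_fog(text: str) -> float:
    """Rough Gunning fog index: 0.4 * (words per sentence + 100 * complex words / words).

    The syllable count is a crude vowel-group heuristic, so treat the
    result as a ballpark figure, not an exact readability score.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Count runs of consecutive vowels as syllables (rough heuristic).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

A score around 8 corresponds roughly to a middle school reading level, while 17 or more is university-level material.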

This approach should reduce the number of inbound requests to the organization. And it's an approach you can use internally to do the same for your team.

Clear communication can reduce your workload.

Let's look at another positive example. This one is from CISA, the Cybersecurity and Infrastructure Security Agency in the US. CISA is the national coordinator for critical infrastructure security and resilience in the United States and often acts as a cybersecurity centre of excellence for their public service.

We're going to dig into their Log4j vulnerability guidance page. They got this page up quickly when Log4j went public and used it as the single source of truth for the issue. They updated it repeatedly as new information about the vulnerability came to light and made sure that the page was as comprehensive as possible.

Here's a section of the CISA page that we'll be looking at. It's solid.

But, I do want to point out one approach that may create challenges for the intended audience...

Each of the highlighted passages are technical terms or industry specific language.

That's not necessarily a bad thing. CISA has a specific target audience in mind—security experts.

However, given their position within the US public service, they are also going to have a lot of general IT folks and other interested readers looking at this too.

The question is, can we reduce the specific language without reducing the effectiveness of the writing or the technical details?

We won't go through each term point by point, but here's a quick example of what we could swap out:

- "active, widespread exploitation" => "attackers are currently using this"
- "unauthenticated remote actor" => "attackers don't need to login to use this successfully over the internet"

Yes, sometimes a longer sentence is a clearer one. When in doubt, a longer sentence with fewer niche terms and more straightforward language is probably going to be more effective.

The page also needed more context. While it covers a specific vulnerability, that vulnerability has a wide-ranging impact that is crying out for explanation.

The second paragraph with, "...is very broadly used in a variety..." doesn't provide enough context. Something like this might've been more effective, "Log4j is a key building block of a lot of software and most people are unaware their systems are using it. It helps developers write log information that's helpful for troubleshooting, that's why it is a part of a lot of unexpected systems."

Last example, again a positive one.

This time, we'll look at an open source project called Prowler. This is "an open-source security tool designed to assess and enforce security best practices across AWS, Azure, Google Cloud, and Kubernetes".

It's a great project and helps a lot of organizations improve their security posture.

In this example, we're going to look at a specific detection from the platform and how it aims to help developers and security folks avoid a security issue.

Here's the detection information in full. It's typically delivered as a JSON object in the platform, or teams will route these findings to Slack or some other system where they're already working.
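The payload itself isn't reproduced here, but as a rough, hypothetical sketch (the field names are illustrative, not Prowler's actual output schema), a finding routed into chat might look like:

```python
# Hypothetical finding payload; the field names are illustrative only,
# not Prowler's actual output schema.
finding = {
    "status": "FAIL",
    "severity": "critical",
    "description": "Hard-coded credentials found in function code.",
    "risk": "The use of a hard-coded password increases the possibility "
            "of password guessing.",
    "recommendation": "Use AWS Secrets Manager to store the credentials.",
}

def route_to_slack(finding: dict) -> str:
    # Format the finding the way a team might post it into a chat channel.
    return (f"[{finding['severity'].upper()}] {finding['description']}\n"
            f"Risk: {finding['risk']}\n"
            f"Fix: {finding['recommendation']}")
```

Seeing the finding laid out this way makes it easier to spot where the wording could work harder for the reader.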

This is a solid detection. The description is crystal clear. The risk is well constructed and the recommendation isn't too bad.

But two things jump out at me here.

The first is the opening sentence of the risk, "The use of a hard-coded password increases the possibility of password guessing." That doesn't accurately convey the level of risk.

How much does this increase the possibility of the password being guessed? Is that actually the case with this detection? Why is this worth the time to fix?

The second challenge is the recommended fix. Sure, AWS Secrets Manager could help address the issue. But are there other approaches that would work here? Are there other secrets managers that would work?

Again, the original is solid.

But if it provided more of the why in the risk it would be more useful.

"Hard-coded passwords can be stolen by attackers or accidentally exposed in a source code repository. Avoid this pattern if at all possible, as attackers can easily compromise the account the password has access to."

Similarly, the recommendation can be expanded to help the recipient find the best solution for their situation.

"Using a tool to manage secrets—like AWS Secrets Manager—keeps passwords and other secrets out of your code. This pattern makes it easier to update that information (e.g., change the password), while keeping it more secure as the function requests the password only when it's needed."

A couple of small adjustments and we've reduced the number of dots the recipient is required to connect!

As we've seen in the examples we've discussed—and again, they are all positive examples—we can make some small adjustments to our approach to communication to help everyone make better security decisions and reduce the incoming requests to our team.

For communications:

  • Keep it simple
  • Focus on the reader
  • Create shared context
  • Be empathetic

Working upstream

We've talked about communications with an eye to how clearer communications can reduce incoming requests to your security team.

We're going to take that a step further and talk about education. One gap most security teams have today is a failure to help the rest of the business understand how to prevent security issues.

I'm not talking about security awareness training (don't even get me started on that) or a patch management process. I'm talking about genuinely investing the time required to help other folks outside of the security team understand how security first thinking can help them.

Your team works regularly with a number of business teams.

As we discussed in the intro for this talk, that ratio is heavily weighted towards the business teams. You can't keep up with the work coming from all of the different business teams.

One way to help with this is to recruit other folks within the organization to advocate for more security-first or security-focused decisions.

Programs that help build this type of internal community go by a few different names—Security Champions, Security Guardians, etc.—for simplicity we'll call them "Security Advocates". Folks in this group—either "officially" recognized or not—are the people that other teams lean on for security help.

Most organizations have folks filling these types of roles for a variety of specializations. Whether it's usability, performance, accessibility, a specific framework, data analysis, etc., there's always that "go-to" for a certain topic.

Even when you don't have a specific program to nurture and expand this community, this type of dynamic still manages to surface. Making it an actual recognized effort has a lot of benefit. The foremost being you can track your efforts and invest (time, money, etc.) where it's having the biggest impact.

Once you've identified these folks, you can start to shift the dynamic between your team and the business teams.

Even if you don't identify these advocates, you should try to shift the dynamic between the security team and the business teams.

Your goal as a security team should be to teach the business teams about security as often as possible. With few exceptions, you should evolve your current workflows to move as much of that work to the business teams as possible.

Now, I know what you're thinking. Why would other teams take on our work? Why would we want to cede these responsibilities to those teams? And what are we supposed to do?

For your work, don't worry. There is and will always be more than enough security work to go around. 🤦

For the business teams, the advantage is easy to understand. They are best positioned to understand the full context of the risk decision (what are the risks of this new feature/solution/product?), and understanding how security can help them meet their business goals helps them make better decisions. That improvement reduces the time it takes to get things out the door and lets them meet their goals more quickly.

Remember, this is not a complete move of security decisions to the business team. The goal of this effort is to move the decisions that are best made by an informed and educated business team to that team. The security team should be contributing to organization-wide challenges and cross-team risks.

As these efforts mature, your team will do less teaching and more sharing with the security advocates. They, in turn, will take on more of the teaching role.

This can happen organically. But in each case where I've seen this type of effort succeed, it's been through a well understood and funded program.

That can mean any number of things, but it's common to have some sort of incentive structure for the advocates. Whether that's perks or specific compensation rewards or a faster path to advancement. Find what works for your organization's culture and make sure that this type of program is set up so that everyone involved sees the benefit.

You may see this and think it'll never work for your organization. Business teams don't care enough about security to give it this type of prioritization. The cooperation you see today is only because teams have to deal with security (whether by regulation or policy).

When I've discussed that idea with executives around the world, I see a common problem. Most people think of security as work to stop bad things from happening. While that's part of it, that's only a fraction of the work under the security umbrella.

The goal of security is simple. It's to make sure that what you build works as intended...and only as intended.

That's a positive goal. Stopping bad things is a negative goal and it's impossible to actually track that. The positive goal is easier to get people to rally around.

When you understand that security is trying to make sure that the work a team is doing works and only does what it's supposed to, now everyone understands they are working towards the same goal!

Security and the business have the same goals.

They all want:

  • Low-risk changes to production
  • Resilient systems
  • Visibility into their data and the processes they use

To meet those goals, you need to provide the why.

Why does this request matter? Why is this risk an issue?

If you help people understand the why, they can make better decisions moving forward. We want people to think through each situation that comes up. Technology is too complicated to map out each potential challenge beforehand.

If people understand the context of a requirement, they can make better decisions. As the expert, it's up to you to provide that understanding.

Remember that you are the security expert. No one shares your context. You have a broad understanding of the threat landscape, the controls within your organization, and the overall risks the business is trying to balance.

The business teams are just trying to get their work done! They have goals they are working towards and are trying to navigate the various systems and processes to the best of their abilities. They are experts in something else entirely and should not be expected to be or become security experts.

Your goal is to make security frictionless. Or, better put, your goal is to use friction judiciously, helping other people make better decisions.

How can you start? Here are a few ideas for some simple techniques to get the ball rolling:

  • Open office hours
  • Review design docs and ask questions
  • Record quick video explainers for security questions
  • Join team channels and learn!

Let's take a look at how the business team and the security team approach the same issue.

There was a vulnerability in the popular django Python framework in 2022. This framework is used to help build web apps and APIs. The vulnerability was a SQL injection—sending bad database requests to generate unexpected results—that could expose data that shouldn't be available.

This was an important issue to fix, but not an emergency. Think weeks, not days.

If we put on our security hat, we see that...

Risk
Exfiltration of all data in connected database

Ease of exploit
Simple. Crafted string input will start attack

Recommendation
Patch all instances of django with available patch to address issues

Likelihood of exploit
???

Recommended timeline
As soon as possible

With our builder/business hat on...

Risk
Attackers get all of the data in the django database

Functions impacted
Trunc() & Extract()

Issue
Trunc(kind) & Extract(lookup_name) fail to properly sanitize input

Impact to our app
One use of Trunc() in codebase currently

Recommended timelines
Low priority. Combine with future django updates

If we line up these perspectives—by working together as we've discussed—here's where we end up:

Impact to our app
Trunc() used once...as of our last code scan

Risk
Access to the entire database

Risk of the fix
Untested changes to access APIs

Ease of exploit
Simple. One bad input

Recommended timelines
Mitigate now with process changes. Patch within four weeks
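The "mitigate now with process changes" step could be as simple as an allow-list in front of that one Trunc() call while the patch is scheduled. This is a hedged sketch; the function and constant names are mine, not part of django's API:

```python
# A minimal input allow-list a team could put in front of the single
# Trunc() call while waiting to patch. Names here are illustrative,
# not part of django's API.
ALLOWED_KINDS = {"year", "quarter", "month", "week", "day",
                 "hour", "minute", "second"}

def validate_trunc_kind(user_kind: str) -> str:
    # Reject anything outside the known-good set before it ever
    # reaches the vulnerable Trunc(kind) argument.
    if user_kind not in ALLOWED_KINDS:
        raise ValueError(f"unsupported truncation kind: {user_kind!r}")
    return user_kind
```

An allow-list like this sidesteps the injection without touching the framework, buying time for a properly tested upgrade.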

Keys

Remember, most security teams are feeling the crunch. They are overloaded and under budget pressures.

A lot of that has to do with the fact that a small number of security professionals are accountable for the security of a large number of business teams!

Often security ends up blocking other work and tries to solve that problem by simply working harder.

Focus on building out mechanisms that help reduce the manual effort required to do any security work.

However, focusing first on clear communications can help free up resources because you're helping everyone in the organization to better understand security and specific issues without fielding individual questions.

Streamline the work your team does receive by aiming to reduce issues upstream. By educating business teams so that they can make stronger security decisions, you'll reduce what falls to your team to handle.

Communicate

  • Keep it simple
  • Focus on the audience
  • Create shared context
  • Be empathetic

Educate

  • Provide the why
  • Security is one priority
  • You have the same goals
  • Be empathetic

Thank you!

]]>
https://markn.ca/2023/some-small-steps-towards-better-security-communication-and-education/ 688a59e4b62da30001ff52f4 Thu, 20 Apr 2023 10:00:00 -0400
<![CDATA[ The Failure of the Most Desirable Security Control Ever ]]> Want to share with others? Formatting issues in your reader? This post is available online, The Failure of the Most Desirable Security Control Ever

Today Twitter removed the legacy verified checkmark from the platform. Maybe. But also, maybe not?

Who really knows at this point? The specifics and the timeline will be whatever they will be, given the current state of the platform. What is crystal clear is that the blue checkmark has now shed the last hints of its original use as a security control.

Are You, You?

The original verified checkmark was either the result of a lawsuit or, coincidentally, an intentional feature launched at the same time. The goal was simple: provide a visual indicator that an account was in fact the person or organization it presented as.

At the time—and somehow once again—there was a flood of accounts pretending to be other people on the network.

This process went through a few iterations and had its share of challenges. Throughout its history, one thing remained constant: the process was a manual verification of the account.

Just Notable Enough

When I applied—yes, I was a legacy—I submitted a copy of my identification and a list of public references in various media publications.

I was denied the first time I applied. Not notable enough.

What did that mean? There was no clear answer. This was a problem throughout the program’s lifecycle. What was this mysterious “notable” bar?

A few weeks later, I re-applied with an expanded list of public references and was verified. At this point in my career I was regularly appearing on CBC television and radio as a technology expert and being interviewed about cybersecurity issues by various media outlets around the world.

This work aligned directly with the goal—at that point—of the program. If someone saw me on TV or quoted in an article, they could have a reasonable assurance that tweets from @marknca were from me.

The Rules

Part of the reason that assurance held up was the rules set out for the accounts with the blue checkmark.

Your profile picture had to be a clear picture of you. While your handle could be anything, your display name needed to be your actual name. You also had to link your phone number to your account—though it was not visible publicly.

You could change your profile picture, but it was supposed to always be a clear picture of you. Any changes to your display name could result in a loss of verification or a re-verification process…though this seemed to be rarely followed up on.

The rules were slightly different for organizations and brands. It wasn’t a perfect setup, but it did a reasonable job of reaching the security goal: helping users spot imposter accounts.

An Unscalable Process

The verification process has been rightly criticized throughout its 14-year lifecycle. The biggest complaint was the “notable” criteria. It’s always been somewhat hand wavy.

Because an account had to reach some arbitrary level of notoriety, the verification process wasn’t accessible to every user. Scenarios where verification could prevent real harm weren’t addressed and users had to find help—if they could—through other abuse reporting mechanisms on the platform.

At the time and in retrospect, the program would always have these issues because of the manual aspects of verification.

When Twitter launched the program, the platform had somewhere around 35 million active users. Even then, the process wouldn’t scale. Let alone as the platform reached 450 million users in 2023.

If we do some napkin math, let’s say a full time employee can ‘verify’ 100 accounts per week (about 15 minutes an account). That one employee, only working on verification, will process about 3,800 accounts in a year.

That was about 0.01% of the platform in 2009. And about 0.0008% of the platform today.
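
Rerunning that napkin math, with the post's own 3,800-accounts-per-year estimate expressed as a percentage of each era's user base:

```python
accounts_per_year = 3_800          # one dedicated reviewer, per the estimate above
users_2009 = 35_000_000
users_2023 = 450_000_000

print(f"{accounts_per_year / users_2009:.2%}")   # prints 0.01%
print(f"{accounts_per_year / users_2023:.4%}")   # prints 0.0008%
```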

It was never going to cover everyone.

Social Status

That failure to scale turned this critical security control into a status symbol. The blue checkmark shifted from being a way to ensure that this account was valid to a social status indicator.

“This account is important” was the takeaway as verified accounts became a smaller and smaller percentage of the platform.

This was made even more pronounced when the program was closed to new applicants a number of times over the years. In the end, only about 420,000 accounts were verified through this process in over a decade. That’s around 0.09% of the user base.

That tiny blue checkmark shows up on the user profile page and next to their display name on every single one of their tweets. When a public indicator is only available to a tiny fraction of users, it’s not surprising that it became a desirable badge for everyone on the platform.

Security Failure

Now the blue checkmark is part of the paid Twitter Blue subscription. The challenge here is one of mismatched perception. There are plenty of stories showing the impact of shifting the visual indicator away from identity verification to payment verification.

What’s interesting is looking at this from a security point of view.

I can’t think of any other case where a security control has shifted its utility so completely.

The underlying expectation by users—at least in the first few months of the new Twitter Blue era—was that the blue checkmark meant an account wasn’t an impostor.

The reality was that the blue checkmark meant the account had paid a subscription fee…or it might not be an impostor.

This is the worst type of security situation. Users are left in the dark as to what an indicator actually means.

If you followed the news about Twitter and took the time to view the profile of the user in question, you could see that the blue checkmark was from the—now—legacy verification program.

If you didn’t take those steps and were simply operating under the same assumption you had been for the past 14 years, you could easily draw the wrong conclusions about the account. Believing that it was legitimate when it was in fact not.

Clarity Is Key

Security is rarely simple. It would be wonderful if decisions sorted neatly into “secure” and “not secure”, but the reality is that almost any decision can be the best security decision if it’s made with a solid understanding of the trade-offs being made.

Security decisions are all about context.

The fundamental challenge with the blue checkmark on Twitter for the past few months—and most likely, for years to come—is that it is trying to balance two completely different contexts.

The first, a strong assurance that this is who you think it is.

The second, that someone is paying a monthly fee to display a little digital icon next to their name.

That a security control became so desirable is an interesting case study. The challenge is that as it shifted to become a status symbol, it became the worst type of control: one that no longer provides any tangible security benefits but is still widely believed to do so.

]]>
https://markn.ca/2023/the-failure-of-the-most-desirable-security-control-ever/ 67f6bbeaaf961b00014432d3 Sat, 01 Apr 2023 10:00:00 -0400
<![CDATA[ ChatGPT Delivers Ideas and Answers on Demand, If You Know How To Ask ]]> Want to share with others? Formatting issues in your reader? This post is available online, ChatGPT Delivers Ideas and Answers on Demand, If You Know How To Ask

Robyn Bresnahan and I chatted about this on CBC Ottawa Morning on 13-Dec-2022.

Machine learning (ML) and artificial intelligence (A.I.) have moved from sci-fi to buzzword to everyday tool. The latest models are making mainstream headlines.

Creations from custom art generators like Dall-E 2, Midjourney, and Stable Diffusion have been trending on social media globally by capturing everyone’s imagination. Targeted solutions like custom AI profile avatars and marketing copy generators are finding audiences by creating unique solutions to very specific creative challenges.

Each of these solutions is a type of generative machine learning model. The latest to catch everyone’s attention? ChatGPT.

Machine Learning Models

At the heart of machine learning are models. A model is basically a computer program. But instead of the program delivering very direct results (e.g., click this button to load this webpage), these programs try to predict or infer answers to questions in a specific problem space.

A model is trained on a large amount of data. That training process allows the model (or program) to make connections and identify patterns between various data points. Those patterns are key to the success of the model.

Given new, similar data, the model uses those patterns to predict or infer a result.

Let’s say you train the model on pictures of all sorts of different pets. Dogs, cats, birds, fish, etc. This training is clearly labeled. So, the program looks at the first photo and it is told, “This is a picture of a dog.” Repeat that several hundred thousand or even millions of times.

That builds the pattern of “A dog usually has four legs, fur, a tail, and a long nose.” Another pattern of, “A fish is always in water, has fins, no arms or legs, gills, and a larger tail.”. And so on.

Now, when the program is run, the user gives the model a new picture it’s never seen before. Essentially asking, “What pets are in this photo?”

The model then uses the patterns it knows about to infer what’s in the picture, returning the result of “This is probably a cat and this is probably a dog.” The answer is “probably” because the model is using inference to basically guess what’s in the picture. It doesn’t actually know what’s in the picture, but based on other pictures it’s been trained on, these are the most likely answers.
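
The train-then-infer loop above can be sketched with a toy nearest-neighbour "model". Here, hand-made feature tuples stand in for real photos; this illustrates the idea of learning from labeled examples and guessing on new ones, not how vision models actually work:

```python
# Each "photo" is reduced to hand-labelled features: (legs, has_fur, in_water).
# Training: store labelled examples. Real models compress millions of these
# into learned parameters instead of memorizing them directly.
training_data = [
    ((4, True, False), "dog"),
    ((4, True, False), "cat"),
    ((0, False, True), "fish"),
    ((2, False, False), "bird"),
]

def predict(features):
    # Inference: find the closest known example and reuse its label --
    # a guess based on patterns, not certain knowledge.
    def distance(a, b):
        return sum(abs(int(x) - int(y)) for x, y in zip(a, b))
    return min(training_data, key=lambda item: distance(item[0], features))[1]

# A new "photo" with no legs, no fur, living in water:
print(predict((0, False, True)))  # prints fish
```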

Generative Models

The latest wave of machine learning models work in reverse. Instead of giving them a picture and asking what’s in it, you provide a prompt and the model infers what that might look like or sound like.

Here, I used the prompt, “An odd couple pairing of a cat and a dog. Hayao Miyazaki style, digital painting”

Dall-E prompt, “An odd couple pairing of a cat and a dog. Hayao Miyazaki style, digital painting”

Based on what the model—in this case, Dall-E 2—knows about cats, dogs, and Hayao Miyazaki’s art style, this is what it came up with.

ChatGPT is a wrapper around the GPT-3 generative large language model. It does the same thing as Dall-E, but with text. You provide a prompt and it will deliver a writing sample to match. When I asked ChatGPT to describe itself, this is what it produced…

ChatGPT is a language model that’s been trained to chat with you. It’s based on GPT-3, which is a really advanced model. ChatGPT can generate responses that sound like they’re coming from a real person, so it’s great for things like chatbots or text-based dialogue systems. It’s not a specific product or tool, but more like a type of model that you can use for different applications.

The results can be quite impressive when they are accurate. Remember the model is designed to produce passable prose, not perfectly accurate text content.

ChatGPT takes things a step further and provides a discussion-like interface to its model. Using that interface, I explored the key concepts behind this technology. You can read the results of that below in “A Quick Chat With ChatGPT.”

What Problems Does ChatGPT Solve Well?

ChatGPT isn’t the end-all, be-all of generative AI. It isn’t going to replace human creativity or human effort. It can help accelerate your work though. Allie K. Miller puts together a great list of areas where ChatGPT excels in this Twitter thread 🧵👇

ChatGPT is changing the game, and I want to share real things you can do with this AI system today.

Please save this thread and start testing this technology NOW so you’re ahead of the curve. — Allie K. Miller (@alliekmiller) December 13, 2022

I agree with most of her key areas, specifically:

  • Summarizing
  • Planning
  • Idea generation

ChatGPT is a wonderful way to get past staring at a blank page! You can check it out at https://chat.openai.com/chat.

A Quick Chat With ChatGPT

Mark: What is machine learning?

ChatGPT: Machine learning is a type of artificial intelligence that allows software applications to become more accurate at predicting outcomes without being explicitly programmed. It is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.

Machine learning algorithms use statistical techniques to find patterns in data and use them to make predictions or take actions. This can be used in a wide range of applications, such as image and speech recognition, natural language processing, and even in self-driving cars.

Mark: How do I make my own model?

ChatGPT: To make your own machine learning model, you will need to gather a large amount of data that is relevant to the problem you are trying to solve. This data will be used to train the model, which is the process of feeding the data into the algorithm and adjusting the parameters of the model to improve its accuracy. Once the model is trained, you can then use it to make predictions or take actions based on new data.

There are many different algorithms and techniques that can be used for training a machine learning model, and the choice of which to use will depend on the specific problem you are trying to solve. Some common algorithms include decision trees, support vector machines, and neural networks.

To create a machine learning model, you will need to have some knowledge of programming and statistics, as well as a basic understanding of the problem you are trying to solve. There are many tutorials and online courses available that can help you learn the basics of machine learning and get started with creating your own model.

Mark: What is the difference between a discriminative and a generative model?

ChatGPT: A discriminative model is a type of machine learning algorithm that is used to predict the value of a target variable (such as whether an email is spam or not) based on a set of input features (such as the sender, subject, and content of the email). It does this by directly modeling the relationship between the input features and the target variable, and using this model to make predictions on new data.

A generative model, on the other hand, is a type of algorithm that is used to learn the underlying distribution of the data, and can then generate new samples that are similar to the data it was trained on. This allows the model to capture the underlying patterns and structure of the data, and can be used to make predictions or take actions based on this learned information.

One key difference between discriminative and generative models is that a discriminative model focuses on predicting the target variable based on the input features, while a generative model is more concerned with modeling the underlying distribution of the data. This means that a generative model is often more flexible and can be used to solve a wider range of problems, but may require more data and computational resources to train.

Mark: What are the biggest technical challenges with generative models?

ChatGPT: There are several technical challenges associated with generative models, some of which include:

  1. Data availability and quality: Generative models require a large amount of high-quality data to learn the underlying distribution of the data and generate new samples that are similar to the training data. If the data is not sufficient or is of poor quality, the model may not be able to accurately capture the patterns and structure of the data, which can reduce its performance.
  2. Computational complexity: Generative models can be computationally intensive to train, especially for large datasets or complex data distributions. This can make them difficult to run on standard hardware, and may require specialized hardware or distributed computing systems to handle the large amounts of data and computational workload.
  3. Mode collapse: One of the main challenges with generative models is the phenomenon of mode collapse, where the model only learns to generate a subset of the data, rather than the entire distribution. This can result in poor performance and a lack of diversity in the generated samples.
  4. Evaluation and comparison: Generative models can be difficult to evaluate and compare, as they are often evaluated on their ability to generate realistic samples, rather than on a more traditional metric such as accuracy or error rate. This can make it challenging to determine which model is the best for a particular task or dataset.

Mark: How can i address bias in my models?

ChatGPT: Bias in machine learning models can arise from several sources, including the data used to train the model, the algorithm used to build the model, and the way in which the model is used in practice. To address bias in your models, there are several steps you can take, including:

  1. Collect and use a diverse and representative dataset: Ensuring that the data used to train your model is diverse and representative of the real-world population can help to reduce bias in the model. This means using a dataset that includes examples from a wide range of individuals, with varying characteristics such as age, gender, race, and socio-economic status.
  2. Use fairness metrics and algorithms: There are a number of fairness metrics and algorithms that can be used to measure and address bias in machine learning models. These can help to identify potential sources of bias in your model and provide suggestions for how to mitigate them.
  3. Regularly evaluate and test your model: It is important to regularly evaluate and test your model to ensure that it is performing well and not introducing any bias. This can be done by conducting tests on the model and comparing its performance on different subgroups within the population.
  4. Be transparent and accountable: Being transparent about the data and algorithms used to build your model, as well as the assumptions and limitations of the model, can help to promote accountability and trust in the model. This can include sharing information about the model’s performance and any measures taken to address bias.

Mark: What is the difference between training and inference?

ChatGPT: Training and inference are two distinct phases in the development and use of a machine learning model. Training is the process of building a machine learning model by feeding it a large amount of data and adjusting the model’s parameters to improve its performance.

This is typically done using a labeled dataset, where the input data and the corresponding target values are known. The goal of training is to find the model parameters that best fit the data and enable the model to make accurate predictions on new, unseen data.

Inference, on the other hand, is the process of using a trained model to make predictions or take actions on new data. This involves providing the model with input data and using the model to generate predictions or take actions based on that data. Inference can be performed using either the same data as was used for training, or new, unseen data.

The main difference between training and inference is that training involves building and adjusting the model, while inference involves using the trained model to make predictions or take actions on new data. Training is typically done offline, while inference is typically done in real-time.

Mark: What are some examples of problems that machine learning is ideally suited to solve?

ChatGPT: Machine learning is well-suited to a wide range of problems, including:

  1. Image and speech recognition: Machine learning algorithms can be trained to recognize objects, people, and spoken words in images and audio recordings. This can be used in applications such as face recognition for security systems, or speech-to-text for voice-controlled assistants.
  2. Natural language processing: Machine learning algorithms can be used to process and understand human language, which can be applied to tasks such as language translation, sentiment analysis, and automatic summarization of text.
  3. Fraud detection: Machine learning can be used to detect patterns and anomalies in large datasets, which can be useful for identifying fraudulent transactions in financial systems, or for detecting spam emails.
  4. Predictive maintenance: Machine learning can be used to predict when equipment or machinery is likely to fail, which can help to prevent disruptions and costly downtime.
  5. Recommendation systems: Machine learning can be used to personalize recommendations for products, music, or movies based on a user’s past behavior and preferences.

These are just a few examples of the many problems that machine learning can be used to solve. Machine learning algorithms are highly flexible and can be applied to a wide range of problems in many different fields.

]]>
https://markn.ca/2022/chatgpt-delivers-ideas-and-answers-on-demand-if-you-know-how-to-ask/ 67f6bbeaaf961b000144313d Wed, 14 Dec 2022 09:00:00 -0500
<![CDATA[ AWS re:Invent 2022 Attendee Guide: Security ]]> Want to share with others? Formatting issues in your reader? This post is available online, AWS re:Invent 2022 Attendee Guide: Security

At re:Invent or attending remotely? Check out my “Ultimate Guide to AWS re:Invent 2022” post for some tips and tricks to get the most out of the show.

For the past few years, I’ve had the opportunity to write the “official” guide for attendees of AWS re:Invent that are interested in security. For AWS re:Inforce—the security-focused conference—I get to be a bit more specific.

Each time I create one of the guides for re:Invent my goal is the same: help everyone understand the foundations of cloud security. Regardless of where you are on your cloud journey, there is a ton of value in strong fundamentals.

…and the fundamental truth of cloud security is that it isn’t an isolated discipline. You need to understand how builders are solving problems in the cloud, what AWS has done to deploy strong services, and what options there are available for you to help with your security practice.

Here is the security topic guide for AWS re:Invent 2022 with links to all of the recommended sessions. You can also watch the sessions back-to-back using this handy YouTube playlist.

Introduction

Security is often misunderstood.

That’s understandable given how it’s usually presented. Let’s fix that.

Simply put, security is one of the main areas of focus when building in the AWS Cloud. It works with the other pillars—Operational Excellence, Performance Efficiency, Cost Optimization, Reliability, and Sustainability—to help you deliver robust, resilient solutions.

I’ve built this guide with that approach in mind. These sessions will help security professionals take a cloud-native approach. It will also help builders of all experience levels understand how security fits into the bigger picture.

There’s a ton of great content this year and these sessions are a great starting point for your security journey and a good reminder of the foundations for those of us further down the path.

ARC210, The well-architected way

Quality service delivery is fundamental to a strong security posture. The principles in the AWS Well-Architected Framework will help your teams build better in the cloud.

CMP210, Modern applications: An infrastructure perspective

EC2 is usually a major part of your compute infrastructure. This session focuses on how EC2 works at a fleet level. Understanding this is critical to expanding your security efforts beyond the instance itself.

CON323, What’s new with Amazon ECS

As happens sometimes, this session was renamed and adjusted to focus on new features in Amazon ECS. The original session name was, “The AWS container services journey: A detailed look”

Containers are ubiquitous. So much so that there are a lot of AWS services that can run them. This session explains the use cases for these services and how your teams can use them to their advantage.

COP301, Observability the open-source way

Observability has massive benefits for all builders, security teams doubly so. Learn about metrics, traces, log data, and OpenTelemetry and how it can help improve visibility into what’s happening in your builds.

COP302, Developing an observability strategy

You have access to mountains of data about how your builds are behaving. This session will help you understand what strategies work to manage that data and how to get the most out of it.

This session hasn’t been published yet

COP304, Cloud compliance, assurance, and auditing

If there is a “trick” to compliance, it’s building evidence collection into everything you do. The good news? A lot of the work you’re doing around observability helps with that. This session will show you how.

COP316, How to monitor applications across multiple accounts

The more accounts your teams use, the more you realize you need a strategy for visibility across accounts. Focusing on application monitoring, the lessons from this session are directly applicable to security.

DOP301, The Amazon Builders’ Library: 25 years of operational excellence at Amazon

The Amazon Builders’ Library is a gold mine of hard fought knowledge about building in the cloud. This session takes a deep dive into how Amazon runs technical operations and what they’ve learned.

This session hasn’t been published yet

ENT218, The architect elevator: Connecting IT and the boardroom

You can’t communicate the same information to a development team and your executives. The contexts are completely different. This session maps out how to tackle those differences effectively.

ENT220, Shipping securely: How strong security can be your strategic advantage

This session explains the business advantages to a strong security approach. Take notes, lots of notes, and use this within your organization. Strong security helps everyone reach their goals faster.

This session hasn’t been published yet

NET207, A deep dive on the current security threat landscape with AWS

This session was originally titled, “2022 AWS Threat Landscape Report”

AWS Security has a unique perspective on the threat landscape. Learn about what they are seeing and how it impacts your organization…or more importantly, doesn’t impact your organization because of their hard work.

SEC201, Proactive security: Considerations and approaches

AWS’s approach to security is one worth replicating. The way they’ve distributed responsibility throughout teams that are building in the cloud is highly effective. Learn how you can do the same in this talk.

SEC203, Revitalize your security with the AWS Security Reference Architecture

The AWS Security Reference Architecture is a good starting point for any build. This is a great technical foundation session, as it takes a closer look at the why and how of that design.

SEC212, AWS data protection: Using locks, keys, signatures, and certificates

Encryption is a critical security control. Looking at its application for data at rest and in transit, this session lays the groundwork for a robust data strategy that will underpin your entire security strategy.

]]>
https://markn.ca/2022/aws-re-invent-2022-attendee-guide-security/ 67f6bbeaaf961b0001443266 Fri, 02 Dec 2022 09:00:00 -0500
<![CDATA[ Werner Vogels Keynote Play-by-play ]]> Want to share with others? Formatting issues in your reader? This post is available online, Werner Vogels Keynote Play-by-play

At re:Invent or attending remotely? Check out my “Ultimate Guide to AWS re:Invent 2022” post for some tips and tricks to get the most out of the show.

Thursday at AWS re:Invent 2022 brought yet another amazing keynote from Dr. Werner Vogels. As is his custom, Werner challenges builders to think more broadly about their approach, to push beyond existing perceived constraints and to build better.

My reaction to the keynote in this Twitter thread 👇.

Twitter Thread 🧵

Tweet 1/135 👇 Next tweet

what will the theme of @werner’s #reinvent keynote be this year? who’s the musical act for @AWSEvents re:Play tonight? what will be your favourite t-shirt of his? let’s find out now… /🧵

Tweet 2/135 👇 Next tweet 👆 Start

dark & stormy, Matrix-style intro video... #reinvent

Tweet 3/135 👇 Next tweet 👆 Start

“The world is asynchronous”, @Werner #reinvent

Tweet 4/135 👇 Next tweet 👆 Start

“Would you like to see a more ordered world? Synchronous?” @werner, “Nope” 🤣🤣🤣 #reinvent

Tweet 5/135 👇 Next tweet 👆 Start

the theme of this video is synchronous processing old school serial work #reinvent


Tweet 6/135 👇 Next tweet 👆 Start

“This place is crazy”, @werner “No. This place is synchronous” “No, this place sucks”, @werner #reinvent

Tweet 7/135 👇 Next tweet 👆 Start

love that intro video! #reinvent

Tweet 8/135 👇 Next tweet 👆 Start

@werner up now, rocking the Matrix look to connect w/the video…then to just the straight up @awscloud Lambda t-shirt #reinvent

Tweet 9/135 👇 Next tweet 👆 Start

continuing to hit on the synchronous/asynchronous differences * I am not going to keep typing out the full word 🤣 #reinvent

Tweet 10/135 👇 Next tweet 👆 Start

original @awscloud S3 design principles included “asynchrony” #reinvent

Tweet 11/135 👇 Next tweet 👆 Start

big differences between these four… - classic synchrony - parallel synchrony - asynchrony - asynchrony + parallel #reinvent



Tweet 12/135 👇 Next tweet 👆 Start

original @awscloud S3 design principles called out “controlled concurrency” and “controlled parallelism” as well! #reinvent

Tweet 13/135 👇 Next tweet 👆 Start

cool real world examples: bird flock/swarming - looks like central control, isn’t - simple rules for each bird creates this effect #reinvent

Tweet 14/135 👇 Next tweet 👆 Start

some of this is covered in “Swarm Intelligence” by Eberhart, Shi, and Kennedy, https://t.co/XpIzYQDcVt #reinvent

Tweet 15/135 👇 Next tweet 👆 Start

👆 that reference is from me, not Werner 😉 #reinvent

Tweet 16/135 👇 Next tweet 👆 Start

“The world is asynchronous”, @werner #reinvent

Tweet 17/135 👇 Next tweet 👆 Start

Synchronous is… - a simplification - a convenience - an illusion #reinvent


Tweet 18/135 👇 Next tweet 👆 Start

“Systems are asynchronous”, @Werner #reinvent

Tweet 19/135 👇 Next tweet 👆 Start

@werner currently walking through some early examples of systems that were built around async principles #reinvent

Tweet 20/135 👇 Next tweet 👆 Start

“Synchrony leads to tightly coupled systems” …and we all know (or should know) how @werner feels about that type of system design! #reinvent

Tweet 21/135 👇 Next tweet 👆 Start

“Asynchrony leads to loosely coupled systems”, @werner #reinvent

Tweet 22/135 👇 Next tweet 👆 Start

benefits of async / loose coupling: - fewer dependencies - failure isolation - evolvable architecture #reinvent

Tweet 23/135 👇 Next tweet 👆 Start

@awscloud S3 at launch was 8 microservices; it’s grown to 235+ now. the loosely coupled design allowed for this type of evolution over time #reinvent

Tweet 24/135 👇 Next tweet 👆 Start

again, this was on the 2006 original design principles for @awscloud S3 #reinvent

Tweet 25/135 👇 Next tweet 👆 Start

at @amazon, they went through the same evolution that lots of customers are going through now: moving away from a monolith to an eventually loosely coupled architecture that can continue to evolve #reinvent

Tweet 26/135 👇 Next tweet 👆 Start

@Amazon Distributed Computing Manifesto is up at https://www.allthingsdistributed.com/2022/11/amazon-1998-distributed-computing-manifesto.html it’s a fantastic read #reinvent

Tweet 27/135 👇 Next tweet 👆 Start

ooohhh, I ❤️ when @werner talks patterns! I always learn a ton. these all help solve your challenges with a loosely coupled approach #reinvent

Tweet 28/135 👇 Next tweet 👆 Start

two key @awscloud services to help here #reinvent

Tweet 29/135 👇 Next tweet 👆 Start

always excellent advice: “Listen, learn, improve”, @werner #reinvent

Tweet 30/135 👇 Next tweet 👆 Start

more on @awscloud Step Functions at https://aws.amazon.com/step-functions/ #reinvent

Tweet 31/135 👇 Next tweet 👆 Start

more on @awscloud EventBridge at https://aws.amazon.com/eventbridge/ #reinvent

Tweet 32/135 👇 Next tweet 👆 Start

NEW: AWS Step Functions Distributed Map - orchestrate large-scale parallel workloads in #serverless applications - available today #reinvent
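👆 another aside from me: a sketch of what a Distributed Map state might look like in Amazon States Language, written as a Python dict. the field names are my reading of the launch docs, and the bucket name is a placeholder

```python
# hypothetical ASL definition for a Distributed Map state, as a Python dict —
# "my-example-bucket" is a placeholder, not a real resource
distributed_map_state = {
    "Type": "Map",
    "ItemReader": {
        # read items straight from S3 instead of the state input payload
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "my-example-bucket"},
    },
    "ItemProcessor": {
        "ProcessorConfig": {
            "Mode": "DISTRIBUTED",  # run iterations as child workflow executions
            "ExecutionType": "EXPRESS",
        },
        "StartAt": "ProcessItem",
        "States": {
            "ProcessItem": {
                "Type": "Pass",  # stand-in for real per-item work
                "End": True,
            }
        },
    },
    "MaxConcurrency": 1000,  # large-scale parallelism is the point here
    "End": True,
}
```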

Tweet 33/135 👇 Next tweet 👆 Start

“Event-driven architectures lead to loosely coupled systems”, @werner #reinvent

Tweet 34/135 👇 Next tweet 👆 Start

more details on some event-driven patterns... (btw, all supported by @awscloud EventBridge) #reinvent

Tweet 35/135 👇 Next tweet 👆 Start

an example used often by @martinfowler, https://martinfowler.com/ #reinvent

Tweet 36/135 👇 Next tweet 👆 Start

another event-driven architecture example from @werner. more details at https://aws.amazon.com/blogs/apn/integrating-readme-with-amazon-api-gateway-to-keep-your-developer-hub-up-to-date/ #reinvent

Tweet 37/135 👇 Next tweet 👆 Start

👆 that’s the blog @werner mentioned #reinvent

Tweet 38/135 👇 Next tweet 👆 Start

❤️ the “oh, want to add to this? ok” evolution examples #reinvent

Tweet 39/135 👇 Next tweet 👆 Start

“All complex systems that work evolved from simple systems that worked”, Gall’s law quoted by @werner #reinvent

Tweet 40/135 👇 Next tweet 👆 Start

“work” is the key part of that law. make it simple. make it work. evolve from there #reinvent

Tweet 41/135 👇 Next tweet 👆 Start

damn #reinvent

Tweet 42/135 👇 Next tweet 👆 Start

fun projects are a great way to learn. more details on Serverlesspresso at https://serverlessland.com/reinvent2021/serverlesspresso the workshop: https://workshop.serverlesscoffee.com/ #reinvent

Tweet 43/135 👇 Next tweet 👆 Start

building to something... #reinvent

Tweet 44/135 👇 Next tweet 👆 Start

NEW: @awscloud Application Composer - visually design and build #serverless applications quickly - in preview today #reinvent

Tweet 45/135 👇 Next tweet 👆 Start

Tweet 46/135 👇 Next tweet 👆 Start

@werner stressing just how much easier @awscloud Application Composer makes it for anyone to start building #serverless apps #reinvent

Tweet 47/135 👇 Next tweet 👆 Start

would have loved to be a fly on the wall in the marketing meeting where @awscloud EventBridge got linked to the concept of being a spider in the web 🤣 💯 true though! #reinvent

Tweet 48/135 👇 Next tweet 👆 Start

@werner talking now about the power of composing solutions using smaller pieces. shouting out the UNIX philosophy around modular software #reinvent


Tweet 49/135 👇 Next tweet 👆 Start

stdin / stdout / stderr has held up for 40+ years #reinvent
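👆 aside from me: the same philosophy in a tiny Python filter that reads stdin and writes stdout, so it composes with anything else in a pipe

```python
import io
import sys

def shout(lines):
    # the filter's logic, kept separate from the I/O plumbing
    return [line.upper() for line in lines]

def run_filter(stdin=sys.stdin, stdout=sys.stdout):
    # read stdin, write stdout: composable with any other tool in a pipe,
    # e.g. `cat notes.txt | python shout.py | sort`
    stdout.writelines(shout(stdin))

# simulate `printf "hello\nworld\n" | this-filter` without touching real pipes
out = io.StringIO()
run_filter(io.StringIO("hello\nworld\n"), out)
```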

Tweet 50/135 👇 Next tweet 👆 Start

great question #reinvent

Tweet 51/135 👇 Next tweet 👆 Start

NEW: Amazon EventBridge Pipes - connected event producers and consumers in seconds - available today! “Easily stitch services together”, @werner #reinvent
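👆 aside from me, not the keynote: a hedged sketch of what wiring up a pipe with boto3 might look like. the ARNs and role are placeholders, and I’m only building the request parameters here, not calling the API

```python
# hypothetical parameters for creating an EventBridge Pipe —
# every ARN below is a placeholder, not a real resource
pipe_params = {
    "Name": "orders-to-processor",
    "Source": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    "Target": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
    "RoleArn": "arn:aws:iam::123456789012:role/pipes-execution-role",
    # optional filtering so only matching events flow through the pipe
    "SourceParameters": {
        "FilterCriteria": {
            "Filters": [{"Pattern": '{"body": {"status": ["NEW"]}}'}]
        }
    },
}

# the actual call would then be:
#   import boto3
#   boto3.client("pipes").create_pipe(**pipe_params)
```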

Tweet 52/135 👇 Next tweet 👆 Start

I am so f—king excited for this 👆 service, I can’t even begin to explain it #reinvent

Tweet 53/135 👇 Next tweet 👆 Start

this starts... #reinvent

Tweet 54/135 👇 Next tweet 👆 Start

@werner trying to explain a ticketing app example…gets distracted by football ⚽ #reinvent

Tweet 55/135 👇 Next tweet 👆 Start

first customer speaker up now. it’s Angela Timofte from @Trustpilot #reinvent

Tweet 56/135 👇 Next tweet 👆 Start

Tweet 57/135 👇 Next tweet 👆 Start

back on stage, Angela explains how trust is at the core of what @Trustpilot is building #reinvent


Tweet 58/135 👇 Next tweet 👆 Start

must not get distracted by @awscloud EventBridge Pipes…there is more goodness to follow in this @werner keynote #reinvent

Tweet 59/135 👇 Next tweet 👆 Start

Angela walking through the @Trustpilot journey. started w/a monolith…until that was breaking things #reinvent

Tweet 60/135 👇 Next tweet 👆 Start

Christmas party outage. among the top five worst times for outages 😉 #reinvent

Tweet 61/135 👇 Next tweet 👆 Start

over time Angela & the @Trustpilot team moved to an event-driven architecture. tons of benefits, and it has enabled the business to scale. hopefully it reduced those Christmas party outages too #reinvent

Tweet 62/135 👇 Next tweet 👆 Start

side note: I like the design of these slides. nice and bold. the cards are clear for the event-driven architecture #reinvent

Tweet 63/135 👇 Next tweet 👆 Start

event-driven has made it way easier for Angela & the @trustpilot team to deal w/spikes in traffic. things scale easier and have a lower ops burden #reinvent


Tweet 64/135 👇 Next tweet 👆 Start

see what I mean about the slide design? #reinvent

Tweet 65/135 👇 Next tweet 👆 Start

some key benefits to moving to an event-driven architecture: - scale, adapt, react - protect platform integrity - teams working in parallel #reinvent

Tweet 66/135 👇 Next tweet 👆 Start

nice customer segment from Angela @Trustpilot. hit the right level of technical depth. cool slides. practical example #reinvent

Tweet 67/135 👇 Next tweet 👆 Start

Angela’s Hero page: https://aws.amazon.com/developer/community/heroes/angela-timofte/ …and Twitter handle that I didn’t find in time: @AngelaTimofte #reinvent

Tweet 68/135 👇 Next tweet 👆 Start

profiles and some publications from the @awscloud Heroes at https://aws.amazon.com/developer/community/heroes/ #reinvent

Tweet 69/135 👇 Next tweet 👆 Start

“Event-driven architectures enable global scale”, @Werner #reinvent

Tweet 70/135 👇 Next tweet 👆 Start

Werner diving into @awscloud DynamoDB as an example #reinvent

Tweet 71/135 👇 Next tweet 👆 Start

speaking of @awscloud Heroes, @alexbdebrie is THE expert for DynamoDB. check out his canonical reference https://www.dynamodbbook.com/ this book has saved me and a ton of teams I know mountains of time, and headaches, and stress, and frustration #reinvent

Tweet 72/135 👇 Next tweet 👆 Start

back on stage, @werner is talking about the challenges of active-active and various failure scenarios more on DynamoDB streams at https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html #reinvent



Tweet 73/135 👇 Next tweet 👆 Start

Tweet 74/135 👇 Next tweet 👆 Start

event-driven architecture patterns: - change data capture - asynchronous coupling - self-healing replicators #reinvent
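👆 aside from me: a minimal sketch of the change data capture pattern as a Lambda handler on a DynamoDB stream. the event shape follows the documented stream record format; the handler itself is hypothetical

```python
# change data capture sketch: pick item changes off a DynamoDB stream
# and hand the new item state to whatever sits downstream
def handler(event, context):
    new_images = []
    for record in event.get("Records", []):
        # INSERT and MODIFY records carry the post-change item in NewImage
        if record.get("eventName") in ("INSERT", "MODIFY"):
            new_images.append(record["dynamodb"]["NewImage"])
    return new_images
```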

Tweet 75/135 👇 Next tweet 👆 Start

Tweet 76/135 👇 Next tweet 👆 Start

specific paper called out, “Using dependency isolation to contain concurrency overload”, by @dyanacek https://aws.amazon.com/builders-library/dependency-isolation/ #reinvent

Tweet 77/135 👇 Next tweet 👆 Start

specific paper called out, “My CI/CD pipeline is my release captain”, by @clare_liguori https://aws.amazon.com/builders-library/cicd-pipeline/ #reinvent

Tweet 78/135 👇 Next tweet 👆 Start

NEW: Amazon CodeCatalyst - unified software development service that makes it faster to build and deliver on @awscloud - in preview today #reinvent


Tweet 79/135 👇 Next tweet 👆 Start

I’m intrigued. @awscloud CodeCatalyst looks like it’s automating a massive amount of the heavy lifting that builders are struggling with. as always, will need to see the edges & opinions of the service… #reinvent

Tweet 80/135 👇 Next tweet 👆 Start

very promising. @awscloud CodeCatalyst extends to other existing tools…like @github #reinvent

Tweet 81/135 👇 Next tweet 👆 Start

back to the main theme, now talking about the speed that event-driven architectures unlock for builders #reinvent

Tweet 82/135 👇 Next tweet 👆 Start

…shows the architecture for @cinchuk… #reinvent

Tweet 83/135 👇 Next tweet 👆 Start

... #reinvent

Tweet 84/135 👇 Next tweet 👆 Start

Tweet 85/135 👇 Next tweet 👆 Start

@werner currently talking about using 3d tech to get properly fitting shoes. not sure who was first with this, but I know you’ve been able to get hockey skates customized this way for a few years 🇨🇦 #reinvent

Tweet 86/135 👇 Next tweet 👆 Start

👆 < he says ironically, as a non-hockey playing, barely skating Canadian > 🤣🤣🤣 #reinvent

Tweet 87/135 👇 Next tweet 👆 Start

the aim of this segment seems to be to tie the use of AR/VR to bridging the gap between current interaction experiences and what would be way more effective #reinvent

Tweet 88/135 👇 Next tweet 👆 Start

@werner explaining the requirements to build a 3d asset from 2d images a/k/a photogrammetry #reinvent

Tweet 89/135 👇 Next tweet 👆 Start

related note: cool paper from @AmazonScience, “LipNeRF: What is the right feature space to lip-sync a NeRF” at https://www.amazon.science/publications/lipnerf-what-is-the-right-feature-space-to-lip-sync-a-nerf #reinvent

Tweet 90/135 👇 Next tweet 👆 Start

... #reinvent

Tweet 91/135 👇 Next tweet 👆 Start

good point from @werner about the 3d placement for some shopping apps. you don’t just need the model of the object, but you need to map the space too #reinvent

Tweet 92/135 👇 Next tweet 👆 Start

details on O3DE at https://www.o3de.org/ #reinvent

Tweet 93/135 👇 Next tweet 👆 Start

Tweet 94/135 👇 Next tweet 👆 Start

@werner talking about spatial intelligence, uses the “pack the boot for the vehicle” example. we’ve all felt that pain at the start of a roadtrip… #reinvent


Tweet 95/135 👇 Next tweet 👆 Start

talking now about the complexities of spatial intelligence. autonomous driving is the current use case #reinvent



Tweet 96/135 👇 Next tweet 👆 Start

“Visualize everything”, @werner << rightfully calls out that visualization isn’t for the systems, it’s for us, the ppl #reinvent

Tweet 97/135 👇 Next tweet 👆 Start

next customer speaker up. it’s Nathan Thomas from @EpicGames #reinvent


Tweet 98/135 👇 Next tweet 👆 Start

nice callback to the intro video with The Matrix Awakens experience on Unreal Engine 5. watch more of that at https://www.youtube.com/watch?v=WU0gvPcc3jQ&feature=youtu.be #reinvent

Tweet 99/135 👇 Next tweet 👆 Start

@UnrealEngine metahuman creator is at https://www.unrealengine.com/en-US/metahuman #reinvent

Tweet 100/135 👇 Next tweet 👆 Start

all of that 👆 👆 is running on @awscloud #reinvent

Tweet 101/135 👇 Next tweet 👆 Start

more on @Twinmotion from @EpicGames at https://www.twinmotion.com/en-US …all running on @awscloud services #reinvent

Tweet 102/135 👇 Next tweet 👆 Start

Tweet 103/135 👇 Next tweet 👆 Start

it’s been a while since we’ve seen a customer example that leverages @awscloud EC2 so much. don’t get me wrong, tons of customers are using EC2 constantly, it’s just not usually the “interesting” story that gets top billing like this #reinvent

Tweet 104/135 👇 Next tweet 👆 Start

btw, @EpicGames announced that RealityScan is available on iOS now #reinvent

Tweet 105/135 👇 Next tweet 👆 Start

lots of cool tooling from @epicgames & @UnrealEngine around building & integrating 3d works. tons to dive into and explore #reinvent



Tweet 106/135 👇 Next tweet 👆 Start

Tweet 107/135 👇 Next tweet 👆 Start

Tweet 108/135 👇 Next tweet 👆 Start

@werner calls out one of the critical patterns for @awscloud: Experiment, measure, learn #reinvent

Tweet 109/135 👇 Next tweet 👆 Start

it’s harder to experiment in some scenarios. in those cases, you need to simulate #reinvent

Tweet 110/135 👇 Next tweet 👆 Start

@werner going way back on this one... #reinvent


Tweet 111/135 👇 Next tweet 👆 Start

getting closer to now in the timeline… #reinvent

Tweet 112/135 👇 Next tweet 👆 Start

…and today simulation is run in a ton of different places #reinvent

Tweet 113/135 👇 Next tweet 👆 Start

reasons for simulation: - physical limitations - hazardous scenarios - manipulation of time #reinvent

Tweet 114/135 👇 Next tweet 👆 Start

👆 some reasons for simulation. these are the ones @Werner is focusing on today #reinvent

Tweet 115/135 👇 Next tweet 👆 Start

moving to spatial simulation now... #reinvent

Tweet 116/135 👇 Next tweet 👆 Start

more on @awscloud SimSpace Weaver at https://aws.amazon.com/simspaceweaver/ #reinvent

Tweet 117/135 👇 Next tweet 👆 Start

the episode of “Now Go Build” that @werner is referencing is up at https://www.youtube.com/watch?v=W_ningGJBGw&feature=youtu.be #reinvent

Tweet 118/135 👇 Next tweet 👆 Start

for the simulation example, it’s to answer the question, “What if we plant here?” the simulation allows for rapid experimentation, and the results are then used to optimize real-world efforts #reinvent


Tweet 119/135 👇 Next tweet 👆 Start

simulations also allow for solution prototyping. the example here is whether a fence put in place to restrict animal movement would protect tree growth. cool stuff! #reinvent


Tweet 120/135 👇 Next tweet 👆 Start

“Simulation plays a crucial role in innovation”, @werner #reinvent

Tweet 121/135 👇 Next tweet 👆 Start

Tweet 122/135 👇 Next tweet 👆 Start

... #reinvent

Tweet 123/135 👇 Next tweet 👆 Start

… #reinvent

Tweet 124/135 👇 Next tweet 👆 Start

@werner talking now about the limits of simulation 👆 #reinvent

Tweet 125/135 👇 Next tweet 👆 Start

a little thought experiment now…let’s look ahead 20 years #reinvent

Tweet 126/135 👇 Next tweet 👆 Start

if quantum computing pans out, that could increase the usefulness of simulation significantly #reinvent

Tweet 127/135 👇 Next tweet 👆 Start

current state has a long way to go. these orgs are working on problems in the space 👇 #reinvent

Tweet 128/135 👇 Next tweet 👆 Start

more on @awscloud Braket at https://aws.amazon.com/braket/ #reinvent

Tweet 129/135 👇 Next tweet 👆 Start

some quick math for the challenge of simulation in some spaces #reinvent

Tweet 130/135 👇 Next tweet 👆 Start

@werner politely declines to pronounce this on stage 🤣 #reinvent

Tweet 131/135 👇 Next tweet 👆 Start

quantum computing could enable simulation at scale. current computing can’t handle the sheer number 👆 of objects required to be tracked & computed #reinvent

Tweet 132/135 👇 Next tweet 👆 Start

interview with Dr. John Preskill that @werner is talking about is up at https://www.youtube.com/watch?v=tcTF_ag_wWU&feature=youtu.be #reinvent

Tweet 133/135 👇 Next tweet 👆 Start

Martin Garrix is the headliner for @awscloud re:Play tonight! #reinvent

Tweet 134/135 👇 Next tweet 👆 Start

… #reinvent

Tweet 135/135 👇 Next tweet 👆 Start

@werner ends with a call to build bigger, better, bolder systems much faster! now go build. #reinvent /🧵

]]>
https://markn.ca/2022/werner-vogels-keynote-play-by-play/ 67f6bbeaaf961b0001443265 Thu, 01 Dec 2022 09:00:00 -0500