The Failure of the Most Desirable Security Control Ever
Today Twitter removed the legacy verified checkmark from the platform. Maybe. But also, maybe not?
Who really knows at this point? The specifics and the timeline will be whatever they will be, given the current state of the platform. What is crystal clear is that the blue checkmark has now shed the last hints of its original use as a security control.
Are You, You?
The original verified checkmark was either the result of a lawsuit or, coincidentally, an intentional feature launched at the same time. The goal was simple: provide a visual indicator that an account was in fact the person or organization it presented itself as.
At the time—and somehow once again—there was a flood of accounts pretending to be other people on the network.
This process went through a few iterations and had its share of challenges. Throughout its history, one thing remained constant: the process was a manual verification of the account.
Just Notable Enough
When I applied—yes, I was a legacy—I submitted a copy of my identification and a list of public references in various media publications.
I was denied the first time I applied. Not notable enough.
What did that mean? There was no clear answer. This was a problem throughout the program’s lifecycle. What was this mysterious “notable” bar?
A few weeks later, I re-applied with an expanded list of public references and was verified. At this point in my career I was regularly appearing on CBC television and radio as a technology expert and being interviewed about cybersecurity issues by various media outlets around the world.
This work aligned directly with the goal—at that point—of the program. If someone saw me on TV or quoted in an article, they could have a reasonable assurance that tweets from @marknca were from me.
Part of the reason that assurance held up was the rules set out for the accounts with the blue checkmark.
Your profile picture had to be a clear picture of you. While your handle could be anything, your display name needed to be your actual name. You also had to link your phone number to your account—though it was not visible publicly.
You could change your profile picture, but it was supposed to always be a clear picture of you. Any changes to your display name could result in a loss of verification or a re-verification process…though this seemed to be rarely followed up on.
The rules were slightly different for organizations and brands. It wasn’t a perfect setup, but it did a reasonable job of reaching the security goal: helping users spot impostor accounts.
An Unscalable Process
The verification process was rightly criticized throughout its 14-year lifecycle. The biggest complaint was the “notable” criterion. It was always somewhat hand-wavy.
Because an account had to reach some arbitrary level of notoriety, the verification process wasn’t accessible to every user. Scenarios where verification could prevent real harm weren’t addressed and users had to find help—if they could—through other abuse reporting mechanisms on the platform.
At the time, and in retrospect, the program was always going to have these issues because of the manual aspects of verification.
If we do some napkin math: say a full-time employee can ‘verify’ 100 accounts per week (roughly 15–20 minutes per account, plus overhead). That one employee, working on nothing but verification, will process about 5,000 accounts in a year.
That was a vanishingly small fraction of the platform in 2009, and an even smaller fraction today.
It was never going to cover everyone.
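The napkin math above can be sketched in a few lines of Python. Every figure here is an assumption for illustration: the throughput is a guess, and the user-base numbers are rough public estimates, not exact counts.

```python
# Rough illustration of why manual verification can't scale.
# All figures are assumptions: throughput is a guess, and the
# user-base numbers are rough public estimates, not exact counts.

ACCOUNTS_PER_WEEK = 100      # assumed manual reviews per employee per week
WORK_WEEKS_PER_YEAR = 50     # assumed working weeks in a year

reviews_per_year = ACCOUNTS_PER_WEEK * WORK_WEEKS_PER_YEAR  # 5,000

user_base = {
    "2009": 30_000_000,      # rough estimate of Twitter's user base in 2009
    "2023": 330_000_000,     # rough estimate of Twitter's user base today
}

for year, users in user_base.items():
    coverage = reviews_per_year / users * 100
    print(f"{year}: one full-time reviewer covers {coverage:.4f}% of users per year")
```

Even with generous assumptions, a single reviewer covers only a few hundredths of a percent of the platform per year; covering everyone would take tens of thousands of full-time staff doing nothing else.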
That failure to scale turned this critical security control into a status symbol. The blue checkmark shifted from being a way to confirm an account’s identity to a social status indicator.
“This account is important” was the takeaway as verified accounts became a smaller and smaller percentage of the platform.
This was made even more pronounced when the program was closed to new applicants a number of times over the years. In the end, only about 420,000 accounts were verified through this process in over a decade. That’s roughly 0.1% of the user base.
That tiny blue checkmark shows up on the user profile page and next to their display name on every single one of their tweets. When a public indicator is only available to a tiny fraction of users, it’s not surprising that it became a desirable badge for everyone on the platform.
Now the blue checkmark is part of the paid Twitter Blue subscription. The challenge here is one of mismatched perception. There are plenty of stories showing the impact of shifting the visual indicator from identity verification to payment verification.
What’s interesting is looking at this from a security point of view.
I can’t think of any other case where a security control has shifted its utility so completely.
The underlying expectation by users—at least in the first few months of the new Twitter Blue era—was that the blue checkmark meant an account wasn’t an impostor.
The reality was that the blue checkmark meant the account had paid a subscription fee…or it might not be an impostor.
This is the worst type of security situation. Users are left in the dark as to what an indicator actually means.
If you followed the news about Twitter and took the time to view the profile of the user in question, you could see that the blue checkmark was from the now-legacy verification program.
If you didn’t take those steps and were simply operating under the same assumption you had held for the past 14 years, you could easily draw the wrong conclusion about the account, believing it was legitimate when it was in fact not.
Clarity Is Key
Security is rarely simple. It would be wonderful if decisions sorted neatly into “secure” and “not secure”, but the reality is that almost any decision can be the best security decision if it’s made with a solid understanding of the trade-offs involved.
Security decisions are all about context.
The fundamental challenge with the blue checkmark on Twitter for the past few months—and most likely, for years to come—is that it is trying to balance two completely different contexts.
The first, a strong assurance that this is who you think it is.
The second, that someone is paying a monthly fee to display a little digital icon next to their name.
That a security control became so desirable is an interesting case study. The challenge is that as it shifted to become a status symbol, it became the worst type of control: one that no longer provides any tangible security benefit but is still widely believed to do so.