More than 1k people at Twitter had ability to aid hack of accounts (reuters.com)
382 points by 0xedb on July 23, 2020 | 230 comments



accounts with more than 10,000 followers should at least need two people to change key settings

For accounts that could start a war this might be necessary, but for celebrities with >10K followers this sounds expensive and unnecessary to me.

To me, it seems like you could instead ensure the admin view of every account has a timestamped log of recent settings changes, including changes done by admins, with a link to the profile of the admin responsible, and a button to suspend that admin account with one click.

This way, the security team could've seen that Elon Musk's account had just been reset by J. Random Employee minutes before the suspicious bitcoin tweet, messaged J. on Slack to be like "hey, did you do that?", and suspended the compromised admin account within minutes.

Sure, some accounts might be briefly compromised initially, but it would be resolved in minutes and not the hours that it took Twitter, right? That seems fine for what should be a relatively low-likelihood, high-expense attack like a compromised admin account (of course, you have to ensure that is the case).
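A minimal sketch of what that admin-side view could look like (all names and thresholds here are hypothetical, not Twitter's actual tooling):

    import time
    from dataclasses import dataclass, field

    @dataclass
    class AdminAction:
        admin_id: str
        target_account: str
        setting: str
        timestamp: float = field(default_factory=time.time)

    class AdminAuditLog:
        """Append-only log of settings changes, queryable per account."""
        def __init__(self):
            self._entries: list[AdminAction] = []
            self._suspended_admins: set[str] = set()

        def record(self, admin_id: str, target_account: str, setting: str) -> None:
            if admin_id in self._suspended_admins:
                raise PermissionError(f"admin {admin_id} is suspended")
            self._entries.append(AdminAction(admin_id, target_account, setting))

        def recent_changes(self, account: str, within_seconds: float = 600):
            """What the security team needs: who changed what, and when."""
            cutoff = time.time() - within_seconds
            return [e for e in self._entries
                    if e.target_account == account and e.timestamp >= cutoff]

        def suspend_admin(self, admin_id: str) -> None:
            """The one-click 'suspend that admin' button."""
            self._suspended_admins.add(admin_id)

    # The security team sees an email was just reset, checks who did it,
    # and suspends that admin account immediately.
    log = AdminAuditLog()
    log.record("j.random.employee", "@elonmusk", "email_address")
    for entry in log.recent_changes("@elonmusk"):
        print(entry)
    log.suspend_admin("j.random.employee")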


‘Two people’ misses the entire problem here.

If twitter ‘verified’ means anything, it means a chain of identity has been established between Twitter and the purported owner of that account. That chain should be documented somewhere - there must be some record in the ‘verified account management’ system that says something to the effect of ‘after we gave this actual verified human this token, this email from this address arrived on this date containing that token, establishing that this person had control over that email address on that date’.

If random twitter admins can change the email address and disable 2fa on verified twitter accounts, and those accounts can still publish tweets without them going into a ‘held pending verification that the blue check mark still applies to the person now in control of that account’ queue, then twitter verification doesn’t mean much.

Which might be of interest to organizations like the SEC who hold that communications over a verified twitter account count as official corporate notices, and various public safety organizations that have let it be known that messages on their twitter can be relied on as a source of official information during natural disasters...

Twitter needs to get serious about blue checks.


> If twitter ‘verified’ means anything, it means a chain of identity has been established between Twitter and the purported owner of that account.

Not really, a blue checkmark is just a status symbol.


This is exactly the problem with the blue tick. It's basically meaningless other than as a badge of honour. It's also restricted to large companies and 'public' figures.

What I'd like to see is the Blue Tick restored to being an actual mark of Verification, and made something that anyone can apply for with the appropriate identification documentation.

Additionally, there should then be a toggle switch, where only Verified accounts see tweets and replies from other Verified accounts[1]. This would effectively create two Twitters; one where every account is identifiable and accountable for what they tweet, and another that continues with the anarchic system they have now, where hate speech, racism and intolerance run rife[2].

---

[1] I've heard rumours that this toggle switch already exists on Verified accounts - can anyone confirm?

[2] Yes, free speech may be trapped here too, unless some sort of middle ground can be worked out.


I think Yonatan Zunger had the right solution to this: Verify Facts Not People.

Replace the checkmark with a Verified Badge that says their position if they're an elected official or major organization leader, or just Real Name if they've verified their ID.

He wrote this in response to the Jason Kessler kerfuffle, well before the "factcheckUK" stunt, but it would have actually solved that too! Imagine seeing the username "factcheckUK" with the Verified Badge "Conservative and Unionist Party, UK 🇬🇧".

https://medium.com/@yonatanzunger/the-hard-lessons-of-blue-c...


Sounds just like the 'real names' policy that Google and Facebook have tried before. That never made any difference to hate speech, racism and intolerance, so why do you think it will magically make Twitter better?


I can imagine at minimum this would help with bots. Considering the problems you state are actual systemic issues in our society, I wouldn't expect the verification process to be perceived as working towards solving those.


I have no idea how comment bots run today, but for anyone only slightly invested, it would mean one additional hurdle (get accounts for actual people who are not using the service), but not a blocker (like captchas: they mostly serve to annoy regular users).


That seems like a pretty large hurdle, compared to today where it seems they're just creating as many accounts as they want. There isn't a single political tweet without a bunch of bot garbage as replies. I assume any name with 8 digits at the end is a bot and an auto-block (and these are all over political tweets), but there are so many more tweets that are highly suspicious once you go in and look at their feed.


It would also exclude venerable people which is the problem with the real name idea


We can imagine a system where twitter checks that person is a real unique human but does not use their personal data for anything else.


Like when they used mobile phone numbers given for security purposes for marketing?


Until the next hack links their real name with their handle? No thanks.


How to verify uniqueness?


*vulnerable, presumably


Because the world is in a different place now. As long as twitter doesn't make it mandatory, it should work. Those that want a bit of decency on Twitter can get verified and have the knowledge that the people they converse with are who they say they are, and those that do not can carry on using Twitter in the same way they always have.


[1] Sort of. It's a tab on the Notifications screen that only shows replies/likes/mentions from other Verified accounts.


Thought so. Thanks!


Do you really think blue check marks are ‘meaningless’? Are you thinking of them in the context of it being somewhat arbitrary which of your peers have blue checks and which don't? Because for sure, if you're part of a professional community, that is common - you'll find academics and journalists and medical professionals and so on all have very random experiences with blue check marks, much like tech does.

But at the same time, in aggregate, blue check marks do provide some degree of legitimacy to accounts - yes, this account is that person you know from outside twitter.

But twitter don’t do a great job of explaining how they verified an account, or even who they verified it to be. As has recently gone viral, twitter gave @sistersofmercy a blue check mark, because they really are the catholic institute of that name - not the band (@tsomofficial doesn’t have a blue check mark ...)

I think there’s the basis of something interesting in ‘verified accounts’ - and for sure they’re flawed - but I don’t think ‘meaningless’ is correct.


What do you think of Yonatan Zunger's proposed fix, to verify facts and not people?

So @sistersofmercy might get a Verified Badge saying "Religious Institute, Ireland", and @tsomofficial might get a Verified Badge saying "Musical Group, UK".

He wrote this in response to the Jason Kessler kerfuffle, well before the "factcheckUK" stunt, but it would have actually solved that too! Imagine seeing the username "factcheckUK" with the Verified Badge "Conservative and Unionist Party, UK 🇬🇧".

https://medium.com/@yonatanzunger/the-hard-lessons-of-blue-c...


100% this is how to make it meaningful.


This problem has been solved for decades - cryptographic signatures. Twitter and their users are uninterested in a real solution, they just want engagement.


It gives validity and implies trust.


As Twitter only has about 5k employees, having more than 1k (i.e. over 20%) with access like this is shocking, and the fact that "J. Random Contractor" has access is even more so.

Twitter needs to get serious period and not just blue checks.

Also, a lot of the other FANG-type companies are effectively CNI (critical national infrastructure) - I think they need to start properly vetting people, and I mean real security clearance, possibly including TS.


> held pending verification that the blue check mark still applies to the person now in control of that account

That sounds to me like it's simply having a 2nd person verify the email change is correct. So your suggestion and the article's suggestion ("should at least need two people to change key settings") seem to be very similar if not the same as each other.


The process can be automated depending on how the account needs to be verified. It might be that, say, for a brand account it is connected to a domain ownership/verification model, where 1) the email address must be in a particular domain, and 2) the email owner must be able to complete a challenge/response process that demonstrates they control the company website - e.g. some random value is sent to the email address and they have to make it available via an https resource on the company domain.

For personal celebrity accounts maybe verification is just that the email address must send in a scan of a government photo ID, and sure, maybe that triggers a manual check - but it’s not just a ‘two keys’ solution, it’s a verification process.

Sure, the same admin console might be able to go in and change the verification mode and rules on a verified account - but that’s something that would happen rarely, and a flurry of activity changing the verification process for a bunch of accounts would definitely merit a red flag.
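A rough sketch of that challenge/response step, assuming a made-up well-known path (nothing here is an actual Twitter or IETF mechanism):

    import secrets
    import urllib.request

    def issue_challenge() -> str:
        """Random value sent to the account's contact email (step 1)."""
        return secrets.token_urlsafe(32)

    def verify_domain_control(domain: str, challenge: str) -> bool:
        """Step 2: the claimed owner must serve the challenge back over
        https on the company domain, proving control of both the email
        address and the website. The path is illustrative, not a standard."""
        url = f"https://{domain}/.well-known/twitter-verification.txt"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode().strip() == challenge
        except OSError:
            return False

    challenge = issue_challenge()
    # ... email `challenge` to an address @example.com, then later:
    if verify_domain_control("example.com", challenge):
        print("verified: account holder controls example.com")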


I guess that's plausible. But on the other hand Twitter got plenty of red flags with this attack but still struggled to stop it. And is our goal detection or prevention?


Twitter's blue check is a growth hack, not a notary public.


De facto or de jure?


This is all very true, assuming that was the purpose of the blue check. But it isn't. It's a status symbol that people also see as adding legitimacy to the tweets of an account.


pending verification that the blue check mark still applies to the person now in control of that account

Why would you want to ever allow an admin to transfer control of a verified account to an unverified person?

If you're saying that the account recovery process needs to be at least as secure as the credential that was verified (e.g. email address), then I agree. But I don't think reversion to a "pending" state would ever be desirable, though.


PGP solved this issue 30 years ago. I cannot believe we are having this discussion in 2020.


Except that the problem isn't the signature itself, it's the required infrastructure. Grandma doesn't know how to check The Donald's signature. And, of course, the infrastructure is hard (just check the unfixable problems with the PGP persistent DOS attacks that were discussed a year or 2 ago).


Twitter knows how to check a signature though.


I don't tweet much, but when I do I'd love to be able to sign them like I sign my git commits.


We did exactly this for a side project last week:

https://github.com/shabda/tweet-signer

At its core it's just a spec, so feel free to write your own tools! Suggestions for improving the spec are welcome. Please use GitHub issues.
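For what it's worth, here's a rough sketch of the general idea using the stock gpg CLI - not necessarily how tweet-signer's spec works, just the same detached-signature machinery git uses for signed commits:

    import os
    import subprocess
    import tempfile

    def sign_tweet(text: str, key_id: str) -> str:
        """ASCII-armored detached signature over the tweet text --
        the same machinery `git commit -S` uses under the hood."""
        result = subprocess.run(
            ["gpg", "--armor", "--detach-sign", "--local-user", key_id],
            input=text.encode(), capture_output=True, check=True)
        return result.stdout.decode()

    def verify_tweet(text: str, armored_sig: str) -> bool:
        """Anyone (Twitter included) can check the signature against the
        author's published public key."""
        with tempfile.NamedTemporaryFile(suffix=".asc", delete=False) as f:
            f.write(armored_sig.encode())
            sig_path = f.name
        try:
            # "-" tells gpg to read the signed data from stdin.
            result = subprocess.run(["gpg", "--verify", sig_path, "-"],
                                    input=text.encode(), capture_output=True)
            return result.returncode == 0
        finally:
            os.unlink(sig_path)

The obvious catch is that an armored signature is far bigger than a tweet, so in practice you'd publish a hash or link rather than inline it.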


I once had to restore my Authy 2FAs from a backup, and didn't have access to the original device.

Restoring it took 24 hours, during which I got bombarded with text messages and emails warning me that someone was restoring my backup, and that if it wasn't me, I should immediately click or reply to prevent it from happening.

Seems like that might help - a 24 hour waiting period on any significant account changes for verified accounts.


This method is actually used by many countries, most of them in Africa, where thanks to MPESA and such the need for protection against SIM swapping is even higher, since your SIM is literally your bank account.

https://www.wired.com/story/sim-swap-fix-carriers-banks/

"The SIM Swap Fix That the US Isn't Using While foreign phone carriers are sharing data to stop SIM swap fraud, US carriers are dragging feet."


I actually had a similar idea for fighting SIM swaps—we should be able to ask telecoms "hey, when's the last time this phone number was moved to another device/changed IMEI numbers?" and distrust the number if it's been changed less than 48 hours ago.

I've looked but as far as I can tell, such an API does not exist, alas.
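To be clear, it doesn't exist - but a consumer of such a hypothetical carrier API might look like this (the endpoint and response are invented):

    import time

    # Entirely hypothetical: no US carrier exposes this today.
    def last_sim_change_epoch(phone_number: str) -> float:
        """Imagined carrier API: when was this number last moved to a
        new SIM/IMEI? Stubbed here with a fake response."""
        return time.time() - 3600  # pretend: swapped one hour ago

    def trust_number_for_2fa(phone_number: str, cooloff_hours: float = 48) -> bool:
        """Distrust any number whose SIM changed within the cool-off window."""
        age_hours = (time.time() - last_sim_change_epoch(phone_number)) / 3600
        return age_hours >= cooloff_hours

    print(trust_number_for_2fa("+15555550123"))  # False: swapped too recently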


We (NL) do have such a thing, where banks get notified of SIM swaps or number transfers. They put the number on hold for 2FA pending authorization of that change through a different (maybe second) means.


The problem is that with POTS you don't have that kind of capability in the protocol; even Caller ID cannot be verified. Most networks will trust whatever is being sent. It is like SMTP: it was designed in an era where security was simply not there.


pgp solved many use-cases.


[citation required]

(It's true there are a bunch of important things that PGP has helped solve. Ubiquitous person-to-person secure communication and person-to-service cryptographic authentication are not amongst them. PGP is certainly usefully employed in some niche use cases, but it has failed at pretty much all its original goals. I can't remember the last time I used it for anything except verifying a software download, and even _that_ use case only applies to a tiny fraction of places that host software downloads. My Arch Linux installs running pacman and silently checking PGP signatures for me may be the only time I've had PGP code run in maybe a decade...)


Something like Require-Recipient-Valid-Since from SMTP? That would be neat. Does SMS have the necessary protocol flexibility to allow that to be added?


The User Data Header of SMS [0] isn't very flexible and is quite constrained - both it and the message need to fit inside a 140-byte payload.

There are a handful of bytes reserved for a future purpose, which could be used for something like this, but you're limiting how large the message can be, likely significantly.

[0] https://en.wikipedia.org/wiki/User_Data_Header


For SMPP, there is the option to add further data using 'TLV' (Tag/Length/Value) parameters, not only UDH properties.


That is part of a service that we use, provided for some banks, but requires a lot of integration with the mobile networks and a lot of additional business logic around new sims, old sims used on new accounts, old sims used on old accounts when first set up, etc. Banks use it for determining whether it is deemed safe to send OTP or other sensitive messages to a mobile. If sim has been swapped recently, they may then choose not to use text message delivery to prevent potential sim-swap fraud.


> this sounds expensive and unnecessary to me

Should we be OK with what has become a significant communication platform being run with sub-par security because it's "expensive" to do it properly?


Personally I think we need to step back and work out why the fuck anyone is OK with Twitter "accounts that could start a war"? And yet here we are.


Luckily, after this incident, I believe it has gotten way less likely that a tweet can start a war.


There may be a bit of hyperbole in the expression "accounts that could start a war": there are indeed accounts of people who could start a war, yet I fail to imagine how a single tweet, or a few tweets, by some hacker could actually start a war. Escalate tensions, sure. But I assume world leaders and their advisors don't rely (solely) on tweets before calling the cavalry.


Sure, how about we dial the hyperbole down a bit, to "accounts universally known to be a primary mechanism for announcements of international policy by the leader of a country which has started 12 'armed conflicts' in the last 20 years (or 14 if you count them doing it twice in Iraq and Libya)"?

I find it quite horrifying that elected officials are legally allowed to use totally unaccountable social media platforms to communicate policy to the public.


>I find it quite horrifying that elected officials are legally allowed to use totally unaccountable social media platforms to communicate policy to the public.

It also creates a number of issues. If Twitter decides to ban me, doesn't that impact my right to contact my representatives via Twitter (especially since a judge has already ruled that a government official blocking a person on Twitter does violate their rights)?

It seems that the government should only be allowed to use a platform for communication if that platform is treated as a public square that all can access regardless of past history, same as public squares of the past. This isn't putting a limit on Twitter; they are a private company and can do what they want. This is putting a limit on the government.

Now, if Twitter helps the government establish such an account, then they would be a private company choosing to open itself up as a public square and, specifically with regards to the parts that are a public square, losing some of the rights of a private company (they can't ban you from the public square, but they can ban you from anywhere else, as the rest of the site isn't part of the public square).


Mostly, I find it horrifying that we would elect an official whose grasp of diplomacy is so poor as to continually use completely unfiltered channels, with little grasp of the effect of such communications.

The ultimate check on the behavior of elected officials is supposed to be the voters. You shouldn't have to have laws to prevent them from making bad choices. In this case, the voters like their speech to be "tell it like it is", which mostly means hating the same people they hate. They're getting what they asked for, and are likely to ask for it again.


What are our options here? Some *.gov website that public isn't going to read?


Yes. It might still be mirrored on Twitter, but at least Twitter would not be the authoritative source, and the people at the trigger would have a page to look at beforehand.


That's on the public, ain't it?

Next thing, some will demand that all communications with the government happen over twitter too (or another private entity), thus forcing people to have accounts on them.


There are already organisations that have to control employee access to ‘customer’ data very tightly: law enforcement. Law enforcement agencies have access to large databases full of people, along with a huge amount of very sensitive data (both confidential personal data, and stuff like information about ongoing and typically covert investigations).

I’ve worked with several of these types of organisations and the ones that actually want to manage that access well, typically do a pretty good job (though some of them actually want to do it badly, or just don’t care).

Locking down access to sensitive admin consoles isn’t tremendously difficult, requiring additional approval workflows for highly sensitive operations isn’t particularly difficult, and in that context non-repudiation is rather simple to address.


> There are already organisations that have to control employee access to ‘customer’ data very tightly.

How about... anybody who has customers in the EU?


The EU doesn’t provide any standards at all relating to information security. It only specifies that security controls must be ‘appropriate’, but no definition or precedent for what that means. Customer service and community moderation staff accessing customer data, or having administrative control over their accounts would certainly not be a violation of EU law.


Parent was probably referring to GDPR, which (IIRC) mandates that employees only have access to the information strictly necessary for their position. Your doctor's secretary should only have access to your appointment schedule and phone number, not your medical condition.


Actually not: in the UK receptionists triage patients. As an extreme covid risk (transplantee) I get priority.


Well yes, being able to give priority to at-risk patients would fall under necessary use. I doubt the receptionist has your actual medical history, but they would certainly see an indicator of your risk category.


There is no such thing as “necessary use” and the GDPR does not specify that an organisation must restrict employee access to personal data to only those whose access is “strictly necessary” (the cookie law contains that phrase, but in a completely different context).

The only thing the GDPR says that would apply in this circumstance is this:

> processed in a manner that ensures appropriate security of the personal data

As I stated above, the EU provides exactly 0 guidance on what “appropriate security” is, and no form of standard at all that it expects you to comply with. To make matters more confusing, an organisation is allowed to take their own budget into account, against the cost of security controls, when deciding what is “appropriate”.

You could ask the question, was twitter appropriately secure? You might come to the conclusion that they weren’t, because they were breached and any system that is breached must not be appropriately secure. That wouldn’t be an unreasonable conclusion, and as far as anybody knows that could very well be the standard that any data protection authority may decide to uphold at any time of their choosing. But then that would lead you to consider that there is no such thing as a system that can not be breached, so in that case there would be no such thing as a GDPR compliant service.


GDPR Article 6 spends a lot of time defining ‘necessary’ use. It says that ‘processing data’ - which is defined very broadly and includes accessing it - is only legal if it is for a ‘necessary purpose’ - either necessary to accomplish contracted work for a customer, comply with the law, or some few other permitted categories.

Combined with, as you say, that GDPR also states as a matter of principle data must be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing” - I would take that as meaning you can’t just leave data openly accessible to people who don’t need to process it and rely on them not accessing data they don’t need, you are expected to protect the data against that risk. I.e. secure access to data so it can only be processed for necessary purposes.

For small organizations it is possible that ‘telling Janet she isn’t allowed to look in the customer accounts spreadsheet’ is an adequate control, but as organizations get bigger, obviously the expectation that technical controls should be in place expands.


Necessary in the context of article 6 refers exclusively to processing that can be performed without consent from the subject. For example:

> processing is necessary for compliance with a legal obligation to which the controller is subject


GDPR really doesn't cover this

I know that people with access to some telecom systems in the UK have to have been vetted, some even to DV level - i.e. the same as if you were working for a TLA.

So having to pass a TS clearance and the whole SF-86 form for FANG employees is a possibility: "so Elon, about your pot smoking habits..."


I think the long tail was in undoing the actions made by the attackers: resetting passwords, emails, etc.


No, according to The Block, @elonmusk repeatedly tweeted the scam at 4:17pm, 5:19pm, and 5:32pm, a span of 90 minutes, and the final scam tweet was at 6:05pm from @KimKardashian.

An hour after @elonmusk's first scam tweet, 7 celebrity or corporate accounts had tweeted the scam, all with the same Bitcoin address. With the two-click system I described, how many compromised admin accounts would you expect the security team to have been able to suspend by then?

8 more celebrity accounts went on to tweet the scam, plus @elonmusk and @kanyewest repeating the scam tweets.

https://www.theblockcrypto.com/post/71906/twitter-account-ha...


If your database system doesn't have a complete audit log of all fields (most databases have this capability, but more often than not it's disabled), it's possible that the mere act of reverting account ownership might remove data needed for tracing down what happened.

Sure, it's a sucky position to be in, but I can see why they might have been hesitant to dive right in and start trying to undo damage before understanding what had happened.


> I can see why they might have been hesitant to dive right in and start trying

I mean, after all - it was only the cattle.

It's not as if the attackers got into the accounts of customers, the paying advertisers.

(Besides, the Part Time CEO was probably in Africa and unavailable to provide decisive direction, right?)


Which databases? It's definitely not standard in PostgreSQL or MySQL/MariaDB


Replication logs (WAL logs in postgres) contain a complete list of changes to every field. Most big companies keep them as part of a backup strategy. But most wouldn't have the tooling to inspect the logs and see exactly which change was made when during an incident.


How about we don’t start wars based off a twitter feed?


In an ideal world, world leaders would all have restrained enough Twitter habits such that anything that inflammatory would be seen as an obvious signal that their account was compromised.


It is not about bad habits, it's a systemic problem.

World leaders are also politicians that need to maintain their internal position (even dictators need to at least prevent a coup). So when they spew inflammatory rubbish it's often to appear tough to their internal audience, often at the cost of the interest of their own country.

This ideal world has to figure out how to deal with this conflict of interest.


Where is this "ideal world" and how do I get there?


That just won’t work. Just target the attack in the middle of the night or lunch hour.

Furthermore, the next attack will probably be automated and be against far, far more accounts.


Yeah, response time might be slower in the middle of the night, but a falsified tweet on a celebrity account in the middle of the night is also likely proportionally less damaging.

Response time during lunch hour might be slower initially, but after responding to the first compromised account I don't think they'd be any slower.

If an admin account is only supposed to be for use by a human employee, it should have a rate-limit tripwire that automatically suspends the account and alerts the security team.
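Something like this, say (the threshold numbers are made up for illustration):

    import time
    from collections import defaultdict, deque

    class AdminTripwire:
        """Suspend any admin account that performs sensitive actions
        faster than a human plausibly could, and page the security team.
        The threshold is an illustration, not a known Twitter setting."""
        def __init__(self, max_actions: int = 5, window_seconds: float = 60.0):
            self.max_actions = max_actions
            self.window = window_seconds
            self._events: dict[str, deque] = defaultdict(deque)
            self.suspended: set[str] = set()

        def record_action(self, admin_id: str) -> bool:
            """Returns False (and trips) once the rate limit is exceeded."""
            if admin_id in self.suspended:
                return False
            now = time.monotonic()
            q = self._events[admin_id]
            q.append(now)
            while q and now - q[0] > self.window:
                q.popleft()  # drop events older than the sliding window
            if len(q) > self.max_actions:
                self.suspended.add(admin_id)
                print(f"ALERT: suspended {admin_id}, paging security team")
                return False
            return True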


> ... a falsified tweet on a celebrity account ...

A Hollywood celebrity? A bay area techbro "celebrity"? Or a Bollywood celebrity? Or a British Royal family celebrity? Or a KPop celebrity? Or a Russian oligarch celebrity?

The middle of whose night??? Twitter does exist on the other side of the Bay Bridge, you know...


All of those sound like fairly harmless Twitter accounts to lose control of for a few minutes once in a decade?


Twitter is a big company - they’ll have someone available 24/7


> Just target the attack in the middle of the night

Define 'middle of the night'. Is that Eastern US, Western US, GMT, or CET?

If I assume it's 2 AM pacific (since Twitter HQ is in SF), that's 11 AM in most of Europe, which makes it middle of the day for about 700 million Europeans, many of whom speak English and are interested in US celebrities. And that's still ignoring the majority of the World.

Anything on the internet is 24/7. Musk regularly tweets in the 'middle of the night' and I see those tweets come in while drinking a cup of coffee.

Also, Twitter has offices around the world and people don't collectively log out for lunch at a specific time, so there should always be people available to handle this type of incident. Provided they get the training and tools to do so.


How would a Twitter account start a war?


I don't know if it's possible.

But people who do think it's possible probably think it would happen something like this: https://twitter.com/realdonaldtrump/status/12139194805748121...


Also have a Slack channel for automated bot notes of high-value targets having their passwords reset, or something similar.


For comparison, at Google in 2011, I was one of ~10 or so engineers that had the ability to view private Gmail or Gplus data (access that was heavily documented and audited).

That being said, Google did have to go through its own public humiliation [1] to put a system like that in place.

https://gawker.com/5637234/gcreep-google-engineer-stalked-te...


I almost wonder if government officials should be outright banned from using any private messaging platform that isn't hosted by the government itself.

There is just too much power in information.


I believe to an extent they are: Hillary Clinton's hacked email was not the govt-provided one, and most of the flak she received was for using personal email for gov stuff at all.


Don't all engineers working on Gmail theoretically have the same access by conspiring with a code reviewer or two?

It ultimately comes down to the person involved and I do not believe anyone can control the human factor.


They can easily build and view their own versions of the gmail stack, but they would not be able to generate auth tokens to decode the private data of accounts they did not have passwords for.


I was more thinking of deploying trojan-code into the production service (as a trivial example, allow a special password to access any account): it can't be practical to vet every production service change through too many people.

You seem to suggest that you are using an encryption key based on the password or OAuth token at login, which is great to hear, as it stops the simpler forms of trojans like the example above. That makes it much more involved to achieve the same thing (and login-from-new-computer reports make it harder too), especially undetected (because it has to happen over a short period), but not impossible - thinking of cases like just making a new API endpoint, or reusing an existing one, to store actual content in an often-overlooked log file/service.


Kind of sensationalist. There are thousands of people that have the ability to drain your bank account right now. Your average call center employee wields immense power. The real story here is Twitter's lack of spear-phishing training for their support staff, not that support employees have access to support tools.


It's not sensationalist when you realize it directly contradicts Twitter's prior statements from just last year about it:

> Twitter, in a statement, said it is aware that "bad actors" will try to undermine its service and that the company "limits access to sensitive account information to a limited group of trained and vetted employees."

https://www.npr.org/2019/11/06/777098293/2-former-twitter-em...

1,000 people, including contractors outside the company, is not a "limited group of trained and vetted employees." It's news because they misled people about their security, again.


I don't think that's a contradiction, I think you and Twitter have different understandings of what the size of a "limited group of employees" is. The usual advice of dismissing nebulous corporate statements like that applies.

Anyway, even if they had provided a figure, I think you're taking it out of context - the quote says access to "sensitive account information" is limited, not access to account recovery options. So it's potentially someone outside of that limited group whose credentials were compromised.


<cynical view> I think the misunderstanding is over the phrase "sensitive account information".

I notice it wasn't Nestle or Verizon or Disney or Heinz or Unilever accounts that got hacked.

You know, the information about "accounts". The records of monetary transactions.

https://www.statista.com/statistics/1094351/us-twitter-adver...


> 1,000 people, including contractors outside the company, is not a "limited group of trained and vetted employees."

That's not necessarily true. 20% of the company could fairly reasonably be deemed "limited", and there being a thousand of them doesn't mean they're not trained on their tasks.


Everyone having access but James the Janitor is technically limited access too.


We'll have to agree to disagree on what we consider fairly reasonable to call "limited".


Today I learned that Twitter has 4,600 employees. What are they all doing?


Ya cos Twitter is just a CRUD app /s

Once you want to add more people to any business, you need to add even more people to that business.

Let's say you have 10 engineers and want to add another. Suddenly HR's workload has tipped over the limit and you need more HR people. Now communication is fracturing for those 10 engineers and you need a Product Manager and an Engineering Manager to centralise the steering and cohesion of those 11 engineers. Now budgets, payroll and accounting have increased and you need another Finance person.

Suddenly your office is too small so you need a bigger office and an office manager.

This is obviously a contrived example, but having worked at very early stage startup, a mid sized startup and a global megacorp while seeing all of them go through various growth cycles you start to appreciate how headcount can creep in ways which feel indirect to the most pressing problem at hand (ship more features, deliver more customer value).

You see it in software terms too - as your software project scales, suddenly you need more infra and your CI environment gets more complex. Suddenly your workflows don't scale as too many engineers are working on the same code, so you rearchitect (components/microservices), then you need to start building dev tooling and metrics/observability....

I guess as some kind of system scales, the leverage you gain from adding a thing to it has some diminishing curve / inverse relationship to the size of the system.


They have 35 offices, I assume it adds up.


Oh, right. That makes sense.


Support, operations, legal, purchasing, finance, marketing, development, and management.


Running one of the world's most important communication platforms.


1000 people absolutely is limited - it's not everyone at the company! I'm sure they did their annual security training, like everyone here, and they might have even passed an additional screening and another training session for account access.

You have unreasonable expectations for what "limited group of trained and vetted employees" means in a CSR environment for millions of customers.


> 1,000 people, including contractors outside the company, is not a "limited group of trained and vetted employees." It's news because they misled people about their security, again.

I don't think this is misleading at all. Your bank probably has thousands of people who can get equivalent access to your account, and they serve a lot less people than Twitter, and mostly during business hours, in one language, in one country.

1000 people in total when they have to have some available 24/7 isn't many.

Say 200 of them are individual technical staff with access for specific debugging purposes. Then it's only 200 people per 8 hour shift.

There's probably requirements for specific language support too, which increases the head count. There was a period of time some years ago when Twitter's peak usage was from Japan, in Japanese hours, in Japanese language.


Banks make the same promise that it is a "limited group of trained and vetted employees."

In an organization as large as Twitter or a bank, that still means thousands of people. You are seeing 1000 as large because you are forgetting the scale on which they operate.


It can be every employee except one and still be called a "limited group of trained and vetted employees". It's written that way to cover their ass; it's not a statement of security.


It's funny how most people think that 1000 out of 4600 employees having admin access is "not misleading" and counts as a "limited" group. It shows how in the public mind, technology groups should not be held accountable for their actions.


I take it that 1000 out of 4600 employees shows that, not surprisingly, a lot of the staff at this technology company may be involved in hands-on activities against live services. Maybe DevOps style.


Why is that not surprising? Maybe we are used to different types of technology companies though.


I'm sure they took a web training before being handed the keys to the kingdom.


> The real story here is Twitter's lack of spear-phishing training for their support staff, not support employees have access to support tools.

Spear-phishing by its very definition is a highly targeted attack. I wouldn't count on any level of training to prevent someone from getting phished. Given some of the spear phishing campaigns I've seen, I wouldn't trust even myself not to fall for them.

It's a problem that needs to be solved with technical solutions like hardware U2F, locked-down customer support devices (e.g. Chrome enterprise policy managed ChromeOS devices), and special account VIP/anomaly locking and auto-escalation.


FWIW, this is actually quantifiable. We contract with a firm that tests employees' response to spear phishing about once a quarter with varying degrees of "difficulty". Part of an overall scheme that also identifies people who blindly click on things for, uh, further email education.


I'd love to know if you have any data to show if that "further email education" makes any difference in future behaviour...

The cynic in me reckons "Hell no! Those sorts of people are way too often _proud_ of their zero-thought blind clicking and lack of understanding of how things work"...


It's very easy to avoid being spear phished: do not trust any unsolicited message over any medium. Email/text/phone message/popup window purporting to be from your registrar with an urgent call to action? Ignore said call and contact them directly via known good number, email address, URL, etc.

EDIT: Voice mimicry scam? Verify via known channel before taking action.


The question isn't how you and I can individually avoid being spear phished, but what policies can be implemented across an organization to prevent it. Even the most trusted security teams aren't going to be allowed to summarily fire everyone who fails the test.

I also think this is a much stricter standard than you're recognizing. In my company's last spearphishing test, they sent out a link purporting to be a company survey immediately after an all-hands meeting announcing there'd be a survey (the real survey link came a few hours later). Expecting that nobody will be distracted enough to fall for such a thing seems unrealistic no matter how well you train them.


Just wondering if employees failed the test just by clicking on the link or if they had to actually enter some passwords or confidential information on the fake survey site. I wouldn't think clicking a link then looking at the address bar and seeing the domain name is wrong, then closing the page would be a problem, would it?


We got judged on both. Most security teams in my experience feel that even clicking on the link is a big risk, although I've never read a more detailed explanation of why than "oh there might be a 0-day".


I've seen that. It was funny.

The corporate security team sent out the email. It had a link with no actual content behind it - it just gave an error - but clicking it got you on the list of people with bad security behavior.

The trouble at my office was that most employees were highly capable security researchers. These are people who reverse engineer malware for pay and for fun. Of course they eagerly attempted to download from the link! They wanted fresh new malware. People would typically download via wget in a virtual machine on a PC without important data.


Jeez, what's next, seeing how the NOC handles a (fake) hostage situation?


Disabling links in emails by default goes a long way.


How would this work? I get emails like "you have been added to <link> gerrit review" and "<link> Redmine issue was updated" several times a day, and I need to open these links.


The links become text and you would copy and paste. That's how thunderbird does it.
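A quick sketch of that kind of link defusing using only the Python standard library (the output format is my own invention, not Thunderbird's):

    from html.parser import HTMLParser

    class LinkDefuser(HTMLParser):
        """Rewrite <a href="..."> into plain text so nothing is clickable;
        the reader must consciously copy and paste the URL."""
        def __init__(self):
            super().__init__()
            self.out: list[str] = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href", "")
                self.out.append(f"[link: {href}] ")

        def handle_data(self, data):
            self.out.append(data)

    def defuse(html_body: str) -> str:
        p = LinkDefuser()
        p.feed(html_body)
        return "".join(p.out)

    print(defuse('Click <a href="https://evil.example/login">here</a> now'))
    # -> Click [link: https://evil.example/login] here now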


Maybe on an individual level, but on an organisation level it's one of the biggest current attack vectors.

And there is no magic bullet. Trying to educate people gives some results, but mostly just prevents low-effort phishing attacks.

I have never seen a pentest that includes social engineering fail. (This might be just our customers. I would expect govt or infrastructure organizations to be better.)


That's exactly it. If it is inbound you can't trust it.


It’s only going to get worse. Imagine your boss calling you and telling you to provide him some info: https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos...


Do they? There are usually levels of access and the bigger the change the more approval you need. Some things can only be done by an entirely separate team with manager sign off and other oversight.

It seems like Twitter has none of this, and while you can argue it's not the same as banking, there are still sensitive communications, and there's no real reason why anyone should be able to post new tweets or access private messages without several approvals.


> and there's no real reason why anyone should be able to post new tweets or access private messages without several approvals.

Unless new information has emerged recently, this wasn't the attack vector. The attack was resetting account emails/passwords and turning off 2FA.

I agree that there should have been more protections around this, but it's hardly newsworthy that Twitter employs a large support team to support their large userbase - my main gripe is with how the headline is framed.


There's not much detail but how would they gain access from a password reset if they didn't have access to the email account? And if they had email access then they already have everything.

The reset via admin tools must have bypassed the normal email workflow.


Admin tools were used to change the account email address to an attacker-controlled one, then a password reset was requested, which then went to the attacker-controlled email address.


Then that's the vulnerable bypass.

Changing the email is effectively changing the identity attached. It's akin to an account recovery and should require several verification steps before it can be done.


i don't think spear-phishing training is the solution here, especially given the possibility of collaborating insiders. Short social media usernames are bizarrely coveted, and so there's an entire cottage industry based on illicitly commandeering accounts, and these operations have used insiders working for cellular telephone companies who are paid to grease along processes like unauthorized SIM swaps/porting.

For operations on accounts where illicit access can cause massive irreversible damage -- either by exfiltrating private information (emails, DMs/PMs, posts on locked accounts, etc) or by making a post that appears authorized (the more notable the victim the worse it gets) -- there has got to be some sort of two-man rule (https://en.wikipedia.org/wiki/Two-man_rule) integrated into the system that can't be bypassed by the people with authority to make changes to accounts. Otherwise any insider / careless spear-phishing victim will make the changes they want, and there's no reason the adversary will limit themselves to posting shoddily-executed (they used the same address instead of generating one per victim!) bitcoin scams.

Furthermore, i'd really like a way for any user (not just bluechecks) to opt in to some sort of feature where Twitter enforces more stringent requirements/documentation/delays for the email/phone-change and password-reset processes -- at the cost of accepting longer delays or maybe even monetary payment.

There's no reason i need such critical account procedures on twitter (or my email accounts, for that matter) to happen in real time, and i would happily give that up in order to require that such a procedure only happen after, like, a week of enforced, non-bypassable delay where they contact me with details of the change on all my phone numbers and emails every day.
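A sketch of what that opt-in cool-off could look like (the delay and notification details are just illustrative):

    import time
    from dataclasses import dataclass

    WEEK = 7 * 24 * 3600

    @dataclass
    class PendingChange:
        account: str
        field: str       # e.g. "email" or "phone"
        new_value: str
        requested_at: float

    class DelayedChangeQueue:
        """Opt-in, non-bypassable cool-off: sensitive changes only take
        effect after a fixed delay, with a notification every day so the
        real owner has ample time to scream."""
        def __init__(self, delay_seconds: float = WEEK):
            self.delay = delay_seconds
            self.pending: list[PendingChange] = []

        def request(self, account: str, field: str, new_value: str) -> None:
            self.pending.append(
                PendingChange(account, field, new_value, time.time()))
            self.notify_all_channels(account, f"{field} change requested")

        def apply_ready(self) -> list[PendingChange]:
            """Run daily: re-notify for immature requests, apply mature ones."""
            now, ready, waiting = time.time(), [], []
            for c in self.pending:
                if now - c.requested_at >= self.delay:
                    ready.append(c)
                else:
                    waiting.append(c)
                    self.notify_all_channels(
                        c.account, f"{c.field} change still pending")
            self.pending = waiting
            return ready

        def notify_all_channels(self, account: str, message: str) -> None:
            print(f"[to every phone/email on file for {account}] {message}")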


Yes but it's missing the fact that you can't actually drain an account even with access. The overall network is designed for rollbacks, but more importantly, they're internally insured. My money will be put back if someone inside steals it (outside is completely different).

With twitter, the damage is not practically reversible since it was a bitcoin scam.

The analogy/comparison works to a degree, but misses some key differences as to why this is actually worse with twitter than a bank.


CS tools should require some 2FA based on personal information. CS agents should not be able to access arbitrary data.


I worked at a bank as an engineer and while some people did have _read_ access on certain databases, anything PII or otherwise highly sensitive would require special approvals and such approval would only be granted in 1 hour intervals. Write access generally required submitting the specific update query for approvals and was highly discouraged.


Education programs only move the needle; they're not something you do instead of minimizing the number of attack vectors.

For example, education programs do squat against me attacking the employees directly (targeted malware, getting on their computer somehow, offering each of the thousands of employees $10,000 for temp access to their account). And each additional employee only strengthens my attack.


> There's thousands of people that have the ability to drain your bank account right now.

Let them do it... the bank will always be responsible and it will always be resolved without much issue.

Twitter accounts though... good luck taking back these bitcoins from all the ones that got scammed. Good luck even getting back your account if you aren't named Bill Gates.


There's a difference though. The money you lost can be returned in the case of theft, which happens a lot. But Twitter cannot undo what happened here.

Also I don't think thousands of call center people typically have sole ability to directly overwrite bank and brokerage key data fields.


> There's thousands of people that have the ability to drain your bank account right now.

That's false equivalence. If a bank employee drains my account without authorization, it won't be difficult to prove and get back. But once your data leaks, it's out there.


But a bank employee can leak data as well. Some would say more important data than some tweets.


"There's thousands of people that have the ability to drain your bank account right now"

Do you have some data to back that up?

Sounds implausible


Certainly it will depend on who you bank with, but JPMorganChase has 250,000 employees[1]. If even 1% of them are customer service representatives, bank tellers, or in other positions with direct access to your account (which I hope we can agree is an underestimate), that's 2,500 people right there.

[1]: https://www.google.com/search?q=chase+employees


And they all have access to all accounts? I would imagine only a bank teller from the appropriate branch would have access to this branch's accounts.


Maybe back in the 1960s it was like that. Doesn't matter which branch I open an account at. I can walk into any branch and the teller should be able to pull up my info.


Does it matter? There are also millions of people who could murder you or burn down your house or kidnap you and force you to withdraw your money from the bank. But just like a bank worker stealing money, these are all serious crimes and easy to get caught so we're pretty safe.


I remember during my time with a large mobile carrier in the UK, I was told of a person in the company who could in theory read any SMS on the network. Mind you, this was literally one person for over 30 million customers. He had a high security clearance and extensive security training, and the powers vested in him were used mainly to identify scammers and other criminals.

Pretty sure this was a requirement set by law - we need someone to be able to do this, but let's make sure they know what they're doing. It is very weird we don't place the same requirements on social networks.


Social networks were never supposed to be important or serious in the same way as phone networks. I would argue they still aren't. At the bottom, they are just time waster websites. You wouldn't demand that level of security of a php forum would you?


At a certain threshold yes I would, if it served millions of people. A small ISP can get away with terrible security but once they start having millions of customers someone is going to sound an alarm. A forum, written in any language, should be no different. I realise there are challenges in making this happen but they are not unrealistic.


> Social networks were never supposed to be important or serious in the same way as phone networks.

I would think Zuckerberg intended Facebook to be important and serious. You don’t make a billion dollars off of something that’s trivial and unimportant.... The users might view it that way, but that lack of appreciation is just what enables you to make billions off of them.


You’re right about the first part, but large global social networks are quite close to phone networks in importance now.

Though cases like the recent bipolar tweets from Kanye West on Twitter do seem to support your point.

You could have said Wordpress forums, why take it out on PHP, though I get the message ;)


Except that huge public figures tweeting can actually affect real life a lot more than a bunch of SMS messages to yer nan....


Social networks, including Twitter, host tax-payer-supported institutions such as USGS and NASA, which post updates there.


This level of access horrifies me and in more than one way.

We should be limiting what people can do, and not giving them the keys to the kingdom.


I now understand why the bank I work for enforces separation of duties: the person who builds the system has no access to it, and the person with access has no idea how it works.

As a developer, it frustrates the shit out of me because I can’t deploy fixes quickly or easily diagnose issues.

But yep, there are 3 people that have access to the production databases that hold account info and they aren’t developers, just managers with no clue what to do once they log in.

I also worked for a company that sold software to lawyers. We had a feature that would alert the client any time a member of our company accessed their data. I think we called the feature something like “fire call”, because if you tripped it without informing the client, you’d get a call informing you that you’d been fired.


> there are 3 people that have access to the production databases that hold account info and they aren’t developers, just managers with no clue what to do once they log in.

Just for my curiosity is this your observation or is this a company assumption?


Hmm, I think it’s just our group; we have a Production support team that holds the keys, and there are only 3 of them that can access my app.

For example, if I want to change an environment variable, I can’t just log into the cloud console or run a cli command. God no. That would be too easy. I have to write a script for this team to run. This script is entered into an authorization app where a few parties “sign off”, at which point the prod support team can log in to the authorization app and click Deploy. This app then runs my deployment script against our app container to update the env variable.

Accessing and doing DB work follows a similar process.


Reading this makes me happy! Always good to see people taking security seriously.


Yep, this is how strict change control needs to work - if it can be streamlined, all well and good, but not by removing the checks and balances that can help prevent operational issues (not just fraud issues)


It's actually fine to have multiple people get access to production databases, if you have an automated change management system.

Change management authorizes actions which could be problematic by vetting them through a process. The end result of the process is more confidence in the change: we know who's making it, when, where, why, what the impact will be, what we'll do if it goes wrong, etc. If the change looks stupid, dangerous, or uncertain, it gets rejected. If it looks good, it gets approved. The appropriate access is doled out temporarily to make the change. In this way, Larry the Janitor can deploy changes to your prod db, and you can be just as certain (maybe more so?) about the outcome than if your 10x developer were doing it.

Just one example of this is Terraform applies. By loading your changes into a .plan file and requiring approval of the .plan, you know for certain what actions are going to be taken and approve only those. (That doesn't stop Terraform from totally hosing your system due to all the ways it can't predict the outcome of its actions, but at least you know what the intention was)
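For the Terraform case specifically, a minimal sketch of gating `terraform apply` on a reviewed plan file - the Terraform subcommands are real, but the approval plumbing around them is invented:

    import subprocess

    def plan(workdir: str, plan_file: str = "change.tfplan") -> None:
        # Capture exactly what will change into an immutable plan file.
        subprocess.run(["terraform", "plan", f"-out={plan_file}"],
                       cwd=workdir, check=True)

    def show(workdir: str, plan_file: str = "change.tfplan") -> str:
        # Human-readable rendering for the approver to review.
        result = subprocess.run(["terraform", "show", plan_file],
                                cwd=workdir, check=True, capture_output=True)
        return result.stdout.decode()

    def apply_approved(workdir: str, approved: bool,
                       plan_file: str = "change.tfplan") -> None:
        # Applying the saved plan guarantees only the reviewed actions run.
        if not approved:
            raise PermissionError("plan was not approved")
        subprocess.run(["terraform", "apply", plan_file],
                       cwd=workdir, check=True)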


Ha!

Did you see in that article that the head of cyber security for AT&T added his two cents in shaming Twitter?

AT&T was just in the news recently: employees were accepting bribes that allowed criminals to swap SIMs and steal bitcoins from AT&T customers.

Unbelievable.


I have a lot of sympathy for the telcos on this.

They did not volunteer telephone numbers as universal proof of ID. So their threat model was proportional to their intended purpose of the identifier. If bad guys steal your phone number and run up $100 of calls, the phone company would eat the charges and get the number back. Of course nobody did that because it wasn't worth it.

Imagine you own a medium-sized residential building, maybe 20 households live there. You issue them all with front door keys. You use pretty good locks, from a no-duplicate series that isn't trivially picked by amateurs. You figure that picking the door or forging a key would be pretty hard so there's no way it's worth it when someone could just kick it down.

Then, to your astonishment, a local jewellery store announces that anyone who has one of your front door keys can now store up to $1M of valuables in safes they've made accessible from the street. "It's totally safe" they assure your residents, "because how could anybody else get one of these front door keys? That'd be impossible".

Um. What? Before you know what's happening, one of your residents is trying to sue you for a million dollars because they proudly used a safe to store their $1M of Bitcoins for some crazy reason and (duh) somebody just got a duplicate key easily enough for way less than $1M and stole them.


They not only "did not volunteer" to be a secure identity provider via SMS, they've been actively warning against it for almost a decade:

https://www.itnews.com.au/news/telcos-declare-sms-unsafe-for...

Telcos declare SMS 'unsafe' for bank transactions

By Brett Winterford Nov 9 2012

Communications Alliance chief executive John Stanton, representing the interests of mobile providers Telstra, Optus and Vodafone, took the extraordinary step of declaring the technology insecure in the wake of numerous reports of Australians being defrauded via a phone porting scam first uncovered in Secure Computing magazine.

"SMS is not designed to be a secure communications channel and should not be used by banks for electronic funds transfer authentication," Stanton told iTnews this week.


I cannot find fault in that analogy.

Maybe only add that the whole city starts to run on your keys and you have been seeing this for years already.


Not just bitcoins - SIM swapping is how @jack's twitter account got compromised as well.


Thousands of employees, each with their own financial problems and dreams...you're bound to find a taker. Money moves mountains


Which is why once your company is big enough, you should need 2 employees who are unfamiliar to each other to sign off on high value operations.


Bingo. Corrupting two is much less likely.


The key trick isn't so much the two as that they're randomly selected.

I moved a large amount of money a few years back to buy my home (I do not like debt, so I saved up until I could afford somewhere to live, then I bought it)

The bank's web site lets you type in any amount of money but then it says politely that you can't do this from the web site, please call the bank.

I called the bank (they always pick up within 2-3 rings; I've worked with one of their founders, and ensuring that was one of the key ideas behind the bank) and explained what I wanted to do. The nice lady took down all the details, then explained that one of her colleagues would now be randomly selected to call me back and confirm everything, and we hung up.

Sure enough, less than a minute later another of the people from the bank called (with the agreed password for when the bank calls me) and had me read out all the transaction details again, at which point the transaction was confirmed.

Think about that scenario as a bad guy trying to corrupt it. You bribe one employee to pretend someone called and authorised a huge transfer. OK. But then a different random employee has to confirm it. How do you bribe them? You have no way to know who it will be! Do you try just bribing every single employee who works the phones? Not very practical.
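A toy sketch of that control, with made-up names (the point is just that the second approver is drawn at random, so a briber can't know in advance whom to pay):

    import random

    class DualControl:
        def __init__(self, phone_staff):
            self.phone_staff = phone_staff

        def assign_verifier(self, taker):
            # The employee who took the request can never verify it;
            # a *different* colleague is chosen at random to call the
            # customer back and re-confirm every transaction detail.
            eligible = [s for s in self.phone_staff if s != taker]
            return random.choice(eligible)

    ctrl = DualControl(["alice", "bob", "carol", "dave"])
    print(ctrl.assign_verifier("alice"))  # unpredictable, even to "alice"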

The other thing banks do is they background check employees. You can't test for "willing to take bribes" but you can weed out potential hires with previous convictions for financial crime, or debt problems. I've had checks like that for jobs touching sensitive personal information.


> You bribe one employee to pretend someone called and authorised a huge transfer. OK. But then a different random employee has to confirm it. How do you bribe them?

So the first bank employee I bribe is one who can update the phone number on your account to one I control (or, even better, an employee at your telco who'll port your number to my burner phone); the next employee I bribe is the one who pretends the call came in. The different random employee then just does their job, confirming the transaction with a call to me. Bingo, I have your house payment...


In your story the bank trusted a phone call more than you being logged in to the website? How did they authenticate you over the phone?


After identifying myself, I have to give them selected letters from a telephone password and answer a series of arbitrary questions I chose, like "Memorable date". The very large transfer was years before I received a physical two-factor authenticator; it's possible that these days I'd need to prove I had the authenticator too, I don't know.


The first call is authenticated as being with the bank because the customer dialed the phone number the bank publishes on its website.

The second call is authorized via a password given on the first phone call:

> (with the agreed password for when the bank calls me)

How stringent the verification that the caller was actually the OP is unknown.


This is why internal tools that can modify account settings and such need to have audit trails.


It probably does, and it probably wouldn't have stopped this.


[flagged]


Can we do without the condescending "Uh" and "Um" on HN?

An audit trail would tell you who was social-engineered, but it wouldn't have prevented the attack in the same way Wikipedia's revision history doesn't keep you from vandalizing it.


It wouldn't even necessarily tell you who was social-engineered. My understanding of this case was that CS tool credentials were posted globally in their Slack.


In fairness, I don't think it's a real audit trail if the tool doesn't have an effective per-user password.


Auditing tells you what happened; it doesn't prevent it from happening.

If they have logs, they can use them in the future (and it seems they do) to design better protections, but only active alarms and security controls can prevent something from happening in real time.
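As a rough illustration of the difference (all thresholds, field names, and the paging helper below are invented), an active control evaluates the event inline and can hold it, instead of just writing it to a log:

    SENSITIVE_ACTIONS = {"reset_email", "disable_2fa", "reset_password"}
    FOLLOWER_THRESHOLD = 10_000  # arbitrary illustrative cutoff

    def page_security_team(event):
        print("ALERT:", event)  # stand-in for a real pager/chat hook

    def admin_action_guard(event):
        """Return True if the change may proceed immediately."""
        sensitive = event["action"] in SENSITIVE_ACTIONS
        high_value = event["target_followers"] >= FOLLOWER_THRESHOLD
        if sensitive and high_value:
            page_security_team(event)   # real-time alarm...
            return False                # ...and hold pending second approval
        return True

    admin_action_guard({"admin": "jrandom", "action": "reset_email",
                        "target": "@bigaccount", "target_followers": 37_000_000})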

However that does raise the question of why Twitter ever needs such access to someone's account in the first place, especially without a combination of approvals to get that access.


"I forgot my password, please reset it for me."


How does auditing itself prevent a present or future attack? Auditing and what you fix during audits are reactive.


100%. My work has really great auditing tools. I use them often to understand routine actions by others. They still don't prevent an employee from emailing a datacenter to rack a malicious device, or from giving someone service without paying. Record trails are not auditing. They are records.

Auditing, post mortems, whatever diagnose the situation afterwards.

At the end of the day Uber can't stop a driver from kidnapping people, but it can provide documentation and gps coordinates to police.

My point is that companies need reasonable records and audit policies, and when _really bad stuff happens_ you call in the big guns and the long arm of the law.

At some point you also need to trust staff and weigh that against mistakes and malicious intent.

In short, security remains an imperfect balance between practicality and risk.


It's like saying a boat's wake slows down the boat. Sometimes you've just got to wonder what people are thinking.


And in today's "Pompous Commenter That Didn't Read the Article News":

>But while logging helps with investigations, only alarms or constant reviews can turn logs into something that can prevent breaches.


I would be really surprised if they did not have audit trails. What gives you the impression they did not? The suspicion is that the credentials were stolen via social engineering. I wonder if employees needed 2FA to log in to these tools.


> The attackers successfully manipulated a small number of employees and used their credentials to access Twitter’s internal systems, including getting through our two-factor protections.

https://blog.twitter.com/en_us/topics/company/2020/an-update...


I created a Twitter account close to a month ago and it was immediately suspended because it "appears to have exhibited automated behavior that violates the Twitter Rules". It had not actually done anything yet, let alone anything against their rules. The account is still suspended despite multiple appeals and messages.

At the same time, dozens (hundreds?) of verified accounts get taken over. I think their fraud detection systems are total crap.


They do this for all new accounts. It's a way to harvest phone numbers from unsuspecting victims of this surveillance.

It doesn't matter what IP, machine, or whatever you register from. It will automatically get suspended, because I think they've realized it's easier to force people to enter their phone numbers as "protection" right after they've created an account than to just ask for them during signup.

Fewer questions get asked that way. I wrote a blog post on my now-deleted blog, but the discussion on HN is here: https://news.ycombinator.com/item?id=19487304


https://www.theverge.com/2019/10/8/20905383/twitter-phone-nu... Twitter caught using phone numbers for marketing purposes.


I was assuming something like that. Very shady.


Was it actually suspended, or did they just need you to verify with phone number?


It says "Your account is suspended and is not permitted to send Tweets". Why would they need to verify a phone number? I did not give them one.


In an attempt to reduce sock-puppet accounts often used to harass others, Twitter can very quickly "require" you to enter and verify a phone number when you make an account. Unsure what the criteria are.


Twitter seems to have a cowboy engineering culture. That's why one of their execs blamed Rails for their failure to combat harassment [0]. I bet if they still ran Rails now, it would be blamed again, lol.

[0]: https://char.gd/recharged/daily/twitter-blames-ruby-on-rails...


"...a rudimentary web-application framework that made it nearly impossible to find a technical solution to the harassment problem"

To me, this is analogous to the (perhaps undeserved) "the internet is a series of tubes" lampooning, but I'm still chuckling at how they managed to word that so poorly.


It sounds like a perfect answer to those claiming "harassment" is a technology problem.


Emails are overrated as an anti-troll measure. Anyone can make a throwaway email address in seconds.


Should there be citizenship requirements for access to customer data at that scale? Background checks? Security clearances?[1] When you have so much private data and the ability to put words into people’s mouths, aren’t you a national security asset at that point? Today it’s some bitcoin scammers, tomorrow it’s Russian or Chinese intelligence. If I was in charge of Russian or Chinese intelligence, I’d make sure that my citizens working inside these companies are using that data to my advantage, or are at least positioned to should an opportunity arise.

There is already tons of evidence of Chinese nationals coming to the US to work at these companies with the express purpose of stealing trade secrets and sending them back to China. Why would the Chinese government stop there? How about your personal emails, your Twitter DMs, etc.?

Citizenship is loyalty. That is what it means legally and what it has meant in practice. Especially if your family is still in your country of citizenship.

Yes, this would mean the international segmenting of the internet, at least in terms of which websites you plug your personal data into vs. “just browse”. This strikes us nerds as awful. But perhaps anything else was just a naive fantasy. The last decade should have shattered our innocence. What happens online matters for great power politics, and great power politics matters a lot for ordinary people.

[1] The current security clearance process is at least partly a jobs program for people with boring, unadventurous youths. I'm not advocating for that, just the principle of a security clearance.


You know, I used to think that locking down certain websites to citizens of the country the website resides in was a bad thing.

Now with the advent of all these apparent "bots", "state actors", etc. etc. I'm starting to think it might not be a bad idea.

There are a bunch of "what-ifs", however, like "what if the government starts removing content it doesn't like", "should you be able to be banned from the platform?", etc.


At least within the US, I think sufficiently large platforms should not be allowed to censor on the basis of viewpoint. But that is exactly the kind of political question that nation states, not international forces, should be answering.


The biggest problem is that we've allowed them to become so big they're capable of doing that. The rise of censorship culture hasn't helped, and it's egged on by advertisers and the media.


That's a very US-centric view. Twitter has a lot of non-US users as well. In fact, a Dutch right-wing politician was apparently targeted in this attack.[1] How would such a requirement help in this case?

[1]: https://www.reuters.com/article/us-twitter-cyber-netherlands...


Maybe the Dutch should do the same thing. Or throw their lot in with a country or group of countries they trust (EU, EU+x, NATO, etc.). The geopolitics of this would be complicated. But that has been life for small polities for thousands of years.


So I, a European citizen, wouldn't be allowed to see the Instagram posts of my American friends anymore? That doesn't seem practical.


You can federate services in ways that allow entities in different countries to control their own user data, while still allowing interactions between users in different countries.

Your private messages with other EU users might be stored only in Europe, with only Europeans able to access it. To the extent you message with people in another country, those controlling the federated service in that other country would have only the needed access. I don’t think this is groundbreaking technologically.
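A very rough sketch of that idea, with invented regions and in-memory stand-ins for per-region databases (each participant's copy of a message is persisted only in their own home region):

    HOME_REGION = {"alice": "EU", "bob": "US"}      # invented example users
    STORES = {"EU": [], "US": []}                   # per-region data stores

    def send_message(sender, recipient, body):
        # Each user's copy lives only in that user's home region, so the
        # EU copy stays under EU control and the US copy under US control.
        for user in (sender, recipient):
            STORES[HOME_REGION[user]].append(
                {"from": sender, "to": recipient, "body": body})

    send_message("alice", "bob", "hello across the border")
    print(STORES["EU"])  # alice's copy; never leaves the EU store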


What if you're an activist and your government _is_ the enemy?


Right, thousands of people with admin access, and nobody could help me reinstate my API access...


Apparently you gotta have that coveted Verified badge or be an influencer of some sort


There's all this talk from these successful companies about security and what you should do with your keys; they open-source hardware secret stores and brag about it, yet they fail at the most basic security operations.


I wonder if they automatically turned off "Log in with Twitter" for other websites. The bigger hole seems to be that a hijacked account can be used to log in, via OAuth, to any site where the victim signs in with Twitter.


Worth mentioning only 5,000 people work at Twitter.


There are ~330 million active twitter users, which means 330,000 users per employee with access to admin accounts.

That ratio is massively high compared to a large corporation (e.g., a global bank). In a typical global bank, let's say there are 100,000 employees, with about 25-50 IT people holding rights to admin accounts (from first-line support to third-line engineers); that's only 2,000-4,000 users per IT admin.

Based on that, I'm surprised that it's only 1,000 staff members in Twitter with admin access, and not the whole company.


I am not sure why this bank example keeps coming up. Almost no Twitter user contacts support the way bank customers contact their tellers. It's really bad that 20% of the company had access to user data. No wonder it was abused in the past. https://www.buzzfeednews.com/article/alexkantrowitz/how-saud...


> There are ~330 million active twitter users, which means 330,000 users per employee with access to admin accounts.

I think we should look at how many daily requests they get to reset account access settings that can't be handled automatically, i.e., that need one of these 1k admins rather than some self-service system.


Twitter is an advertisement company so most of the employees will be in sales and marketing. Those probably don't need admin access.


This should be a wake up call. Thank god the malicious messaging was only limited to a tiny Bitcoin scam. Imagine if they had pulled this off on the accounts of national leaders to stir hostilities or violence.

What is the recourse for this kind of failure? I suspect there is none. Twitter is shielded from lawsuits for its content. If this is provably negligent behavior and it resulted in actual physical harm, are we supposed to do nothing and simply hope it never happens again?

I cannot fathom what I would do if I were in the position of Timothy Klausutis: https://www.washingtonpost.com/politics/widower-of-late-joe-...


GDPR is "supposed" to hold companies responsible for breaches and bad access controls. This is a different beast than liability protection for content.


Anyone watched Westworld? The whole enterprise is (almost) destroyed by two low-level employees. It's either a complete blooper in the script or, after reading this article, a reflection of a real world I didn't know about. Your take?


> implication that a hostile government might be able to cause even greater havoc.

It is stuff like this that makes me question the whole article. Yes, obviously this was no "hostile" government, since they were just scamming for pocket change. But how exactly would a hostile government create havoc with Twitter?


There are so many government officials on Twitter, and causing any number of them to tweet something plausible but untrue could be a big deal - from moving markets to moving troops.

Just imagine if Donald Trump's account tweeted that Antifa should be shot on sight. I'm certain people would die because of that.

Or, perhaps slightly less plausibly, that Boris Johnson tweeted that he's had enough and is abandoning negotiations with the EU. That would cause a frantic reaction from the markets before any official statement could be put out.


If people are taking Tweets as actual government announcements then they have too many screws loose. Dumb people do dumb shit, what else is new?


I worked at a software house that makes industrial automation software. Every user of the software has all their actions logged and timestamped: if you edit something, give a big discount, grant a permission, or delete something, it all goes into a separate DB holding just the logs. Why doesn't Twitter have something like this? Am I missing something?
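A hedged sketch of that pattern, with SQLite standing in for the separate log database (schema and names are made up):

    import sqlite3
    from datetime import datetime, timezone

    audit = sqlite3.connect("audit.db")  # separate from the app's main DB
    audit.execute("""CREATE TABLE IF NOT EXISTS audit_log (
        ts     TEXT NOT NULL,   -- UTC timestamp
        actor  TEXT NOT NULL,   -- which employee acted
        action TEXT NOT NULL,   -- e.g. 'grant_discount'
        target TEXT NOT NULL,   -- which record was touched
        detail TEXT)""")

    def record(actor, action, target, detail=""):
        # Insert-only by convention: the app has no UPDATE/DELETE path
        # for this table, which is what makes the trail trustworthy.
        audit.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?, ?)",
                      (datetime.now(timezone.utc).isoformat(),
                       actor, action, target, detail))
        audit.commit()

    record("jdoe", "grant_discount", "order#4521", "15% manager override")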


If you're banning thousands of people, deleting thousands of tweets, and updating thousands of accounts as routine customer support work, it might take time for someone to review what you're doing.


"Only two people can launch a nuke, the president, and the engineer who installed the system."


Title corrected : More than 1k people at Twitter had ability to aid hack and chose not to.


> Title corrected : More than 1k people at Twitter had ability to aid hack and chose not to.

This is a stupid way of looking at it. Similarly:

- X number of people owned guns but they chose not to go do a mass shooting.

- X number of cops could kill a black person, they chose not to.

While it's a good thing that the majority of people know right from wrong, we still need to ensure that one person can't do significant damage.

The fact that there were thousands of individuals who could have carried out this hack is not a good thing.


Interesting


On a different note: with remote work culture gaining traction, a good online presence has become a must-have asset.

I bought a course on building a Twitter audience and have been able to improve my following significantly over the past two months.

Twitter link: https://twitter.com/sunilc_

If you're looking to increase your social presence too, here's the course that I found very useful:

https://gumroad.com/a/238777459/PBkrO


Spam motivational quotes and hope people retweet and like?



