Marriott's incident page [1] links to a Q&A page [2]. Apparently the forthcoming sorry-we-lost-your-data notifications will come from "starwoodhotels@email-marriott.com".
"Let's immediately set up a separate domain name that looks like ours" remains one of the weirdest antipatterns in incident response.
Is this to purposefully increase the likelihood of getting caught in a spam/phishing filter? That way they can claim they've reached out while also (probably correctly) claiming it's not their fault if customers didn't get it.
Interesting theory. Mine was that incident management contractors get this sort of business and don't want to bother integrating with the client company's existing infrastructure, so they just set up something entirely different.
Probably not so much "don't want to bother" as "can't do it in a timely manner because of the company's internal processes"
One of the companies I work for has all kinds of crazy domains because the IT department and the Communications department don't get along the way they should.
Close to my theory! Basically, they need to send out millions of emails fast. This email, with its pile of legal text, will probably have a high 'mark as spam' rate, which would destroy the domain's marketing ability. So the marketing guy won the argument in the meeting: don't use the root domain.
I think the stated reason is usually "a single place users can go to directly", the least nefarious reason is that they don't want to associate the breach with their main site, so only affected or inquisitive customers will know about it.
Anti-pattern doesn't seem like a strong enough word sometimes. These domains are available:
emai1-marriott.com
email-marriot.com
e-mail-marriott.com
I wonder whether it might be better if governments took over the notification side of things. Something like "notice@databreach.gov". Companies could pick from a few standard templates and get charged $1.00 per email.
> Is it time for us to simply accept that it's inevitable that, at some point, everything will be hacked, and hacked often?
I disagree. I'd take the economist's route, which is to look for the incentives that drive behavior. If companies were held to a higher standard of accountability, imagine how many would beef up their security. For decades, security researchers have been poking fun at how badly some of these sites handle security, and nothing ever happens.
Now, imagine if there was severe economic accountability to a company that was hacked. Perhaps payouts to each person affected (in this case, to all 150m). I imagine you’d see security become a top priority very quickly at most companies.
As a developer, do you really want to live in a world where "security is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most developers would otherwise spend actually building new products and features?
While companies could probably do better than they are right now, hacks like this are probably never going to be eliminated. There are too many companies and too many developers for nobody to make mistakes, even when they're being mindful not to. Investing in solutions that assume hacks will happen seems reasonable to me.
Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention. The security dimension of many, many products can be improved tremendously by picking off some low-hanging fruit. Ever since companies like Google pushed for HTTPS, it's proliferated all over the place. Just by Google emphasizing it and talking about the need for secure communication even inside one's own network, my own company started doing the same. Enabling HTTPS and SSL wasn't that hard, especially since companies like Let's Encrypt came along. It just wasn't prioritized. Once it was, our engineering team made it super easy to get certificates from LE and we all learned standardized ways of securing our traffic. Security is often low priority because people are really bad at planning for unlikely events with potentially catastrophic consequences.
I'm not saying we can be invulnerable but we need to raise the lowest common denominator so that it's not a walk in the park to steal millions of records. You just need the weakest link to make everyone vulnerable but I do think positive collective behavior can counter that -- especially when you make it easy with things like Let's Encrypt.
> Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention.
I don't think you have quite thought it through. Do you honestly want to have to do a code audit on every library you use? Freeze all versions? Have a chain of signoffs for every change?
I have briefly done consulting in a place like that -- developers were absolutely miserable. Think about every single corporate IT policy that exists and apply it not just to your desktop/laptop/phone but to what you do on that desktop/laptop/phone.
> I don't think you have quite thought it through. Do you honestly want to have to do a code audit on every library you use? Freeze all versions? Have a chain of signoffs for every change?
If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.
"Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.
> If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.
This is never going to happen because what is considered secure in one place is not considered secure in another place.
> "Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.
Because it is about risk management, not about absolutes. It is absolutely irrelevant that the smart Samsung TV in my office has garbage security, because it is used as one thing and one thing only: a dumb 48" HDMI monitor not connected to a wireless network. Its WiFi antenna connector has been cut. It matches my risk profile.
It’s not “either or”, but as someone who has worked at various places along the spectrum of practices ranging from “default password is password” to DO-178B [1], I greatly prefer environments with strict and rigorous design, testing, change control, and security auditing. The chaos of moving fast and breaking things (and fixing them, and breaking them again, and fixing them again, then getting hacked and having to pull 24 hour days to mitigate...) is a recipe for burnout.
If DO-178B were applied to the internet, I would not be surprised if we were still thinking UUCP was an amazing invention in 2018.
I'm going to repeat it again - we do not have a security problem with software. We have a risk management problem.
There's absolutely no reason for Marriott to store information on previous guests past a certain statute of limitations. In fact, they could probably have offloaded it to Iron Mountain after 180 days. Storing it online has a certain risk profile. That risk was not correctly evaluated (probably not evaluated at all) and hence it was not minimized.
Storing credit card information (even encrypted) after the card has been charged and the transaction settled creates another risk profile. It also was not evaluated and it was not mitigated.
Businesses are obsessed with data without understanding the risk.
'As a car designer, do you really want to live in a world where "safety is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most designers would otherwise spend actually building new products and features?'
Most professions and companies are (at least in theory) held accountable for their impacts.
No car on the market is as safe as the absence of a car. Car companies make tradeoffs towards safety where it's reasonable and economical, but still fulfill their baseline mission, which is inherently dangerous. People are injured and killed in car crashes every day; car companies are not "held accountable" unless there's a specific defect and they should have known better.
Such as a company knowing better than to leave their servers unpatched, having a process to make sure their servers are patched, having a process that shows a list of servers that are _not_ patched, etc.
There are a lot of really stupid mistakes made in a lot of these data disclosures that a competent IT team (and dev team) can prevent from happening. The current state of things is that there are hardly any consequences for losing people's data, just make a bulk purchase of credit monitoring and call it a day. This is cheaper than actually hiring the right people and implementing the correct processes.
As a car driver, do you want to live in a world where "braking for pedestrians in crosswalks is a top priority" on every trip? Does such a world even make economic sense after accounting for the opportunity cost of the time that most drivers would otherwise spend moving toward their destinations?
Haha THIS is spot on. Sure, 1 person's address isn't the end of the world ... but 500,000,000 people's information in 1 incident is class action material
And it's not like I'm advocating that every single company needs bulletproof security that can stand up to nation-state adversaries with budgets bigger than the company's; I agree with GP that it just wouldn't be economical.
To stretch the car/driver analogy, you could limit all cars to 10 mph so that they can stop fast enough when a deer runs into the road unexpectedly, but that's probably not worth the tradeoff.
Pedestrians, on the other hand, are a predictable fact of life that you need to deal with when you get in a car. So are bad people on the internet. If you put something on an internet connection and aren't constantly aware of that, you should not be putting it on the internet.
Car companies absolutely quantify risks and make decisions based on it. It is still more about bottom line than safety. When a version of a car fails some tests, they will estimate the cost of a recall versus the cost of a lawsuit. Whichever is smaller wins.
I really wish more developers had at least a basic ethical grounding and didn't just go "fuckit, revenue!". (Or, in larger companies, "fuckit, my boss told me")
And when you consider opportunity cost: even just double-checking that you aren't affected takes a minute of each consumer's time, which means this hack just wasted close to a thousand years of human life (500 million minutes works out to roughly 950 years).
Where's the accounting for the opportunity cost of that?
There is no such thing as being "done" with security. You can go as deep as you want, with as large a team as you want, and never be able to say "okay, we're secure now."
If basic ethical grounding requires security to be the top priority, and security work is inexhaustible, then it must be unethical to ever work on the product being secured.
No, but there is such a thing as "following best practices".
An ethical approach requires you to reason about which actions are moral, not to be "done" with something. As I said, even a basic knowledge would be really helpful.
It's easy to reference amorphous "best practices." As Tanenbaum said of standards, the nice thing is that there are so many to choose from. The real challenge is deciding which practices apply, and which authority to recognize when determining "best."
I agree. But following best practices is a completely different thing from treating security as the top priority. Best practices include tradeoffs that balance security risk with cost and business needs.
There is no such thing as being "done" with safety. You can go as deep as you want with as large a team as you want and never be able to say "okay we're safe now."
If basic ethical grounding requires safety to be the top priority, and safety work is inexhaustible, then it must be unethical to ever work on the product being safe.
Absolutely correct. Safety is about managing risks, not eliminating them completely at all costs. An airline which truly saw safety as the top priority would never put a plane in the air. Making money is the top priority; safety (or security) is one consideration that influences how you go about it.
As an architect do you really want to live in a world where "structural stability is a top priority" at every company?
Does such a world even make economic sense after accounting for the opportunity cost of the time most that building designers would otherwise spend actually building funky new shapes?
Investing in solutions that assume buildings will collapse seems reasonable to me.
I want to unpack a few assumptions before responding.
1) There are 150 million vehicles which can be remotely controlled via the vehicle manufacturer's software, which has generally mediocre application security.
2) The software in question is vulnerable to SQL injection, allowing up to 150 million vehicles to be remotely commandeered by a small group of attackers.
3) No hostages are taken and no owners of cars are deliberately harmed, because this is an application security scenario and not a kidnapping scenario (which is orthogonal).
The scenario you've posed is oddly florid...thinking through it, no, I don't think the robbery of 150 million vehicles is as serious as a bridge collapse with 50 (presumably occupied) vehicles on it.
Speaking more directly to the point - I think this is a really poor comparison. Logistically speaking it's hard to take seriously the idea that 150 million cars would actually be stolen because of any single SQL injection vulnerability. SQL injection is really bad, but it doesn't directly result in injury or loss of life. It's also hard to conceive of a situation in which SQL injection has the potential to cause systemic collapse like you're describing...maybe SQL injection to a database containing credentials that have write access to a server which can launch ICBMs?
In the modal case, I think it's okay to admit that application security is not as serious a concern as architectural stability. But this entire discussion is pretty much a sideshow; we can just all agree that security needs to be taken seriously and that some bureaucratic scar tissue is okay to make that happen.
It is funny that after all posturing in this thread @throwawaymath's post which is one of the few discussing risk assessment is getting downvoted while posts throwing around absolutes and lofty goals are getting upvoted.
It's hard to imagine an SQL injection scenario ever being worse than structural buildings collapsing (Rana Plaza Bangladesh comes to mind), but anything is possible I guess.
Yes. If I have to give up my security and privacy so that you can have less friction, then I most certainly want to take that away. If the online business or organization can give you less friction or security without impacting my security or privacy, then I do not see an issue.
The situation is identical to you wanting to have an untamed lion in your back yard. Provided you have the right security in place to ensure it can't hurt me, your neighbor, then the litter box is your problem. If however you do not have the right protections in place, then I have every right to ensure the lion is removed from the neighborhood.
Yes, and it will feel weird, but developers who value privacy will have the insurance companies backing up their advocacy for storing less data on customers.
If you don't store valuable data, you won't have large premiums.
If your business model requires storing such data, you better have the revenue to pay the premiums.
As a developer, I want to live in a world where I can make a business case for security of something along the lines of "if we don't do this, we'll be hit by crippling fines"
The problem with the internet is that security is an after-thought. The solution is to build security into the communication protocols, and that involves data structures like merkle trees and blockchains.
Why do we need a blockchain or a merkle tree? We have TLS, SSH, PGP, a number of VPN solutions...blockchains and merkle trees are consensus and versioning protocols, not security protocols. Their use of cryptography is orthogonal to the traditional security goals of confidentiality and authentication.
The problem is that incentives aren't aligned. Companies don't care about your personal data, they only care about your metadata, so they won't invest resources into protecting your personal security. TLS, SSH, and PGP are all communications protocols; they provide no rules concerning value exchange, which is what account creation is. When you create an account on a 'free' platform, you're de facto making an exchange of data for value. The issue here is that the transaction is one-sided because there are no guarantees on personal security. If your account information follows you around the web in the form of a public key, then you're in control of your personal security.
As a developer yes. If that cost can't be baked into building new products, either the developer needs to learn how to emphasize the importance, or that company needs to go out of business.
That's wrong for many reasons. Others have covered the simple fact that you couldn't start any app with lots of users and zero capital. The downside is huge. Barriers to entry become more impossible than they already are.
But that's not the worst of it. The Economist here is doing a static analysis, oddly enough. They're making the simple observation that if things cost more or have more risk, they get more attention.
That's if they have more risk today. Once you collect data, it doesn't go anywhere. Every bit that sits on your servers can easily be copied to another server, today, tomorrow, ten years from now. Do you know what all the bits are on your computers?
This isn't DRM'd copyrighted material or porn. You could have a blob of hashes and user IDs. If I put that on your computer, would you know? Could you be expected to find it? Know what it was?
As Facebook and the other platforms are demonstrating, this data continues to have value many years after it was collected. And once somebody gives some data to you, it's effectively both invisible and trackless. Over long periods of time, your cost becomes infinity to maintain this risk. Meanwhile, attack vectors get better and people come and go out of your offices all the time. Could you manage that risk? Forever?
I can't think of _any_ sensitive data on the web that's stayed safe. Why would attaching any amount of value change that?
That's hardly even a speeding ticket for Uber. As long as the fines are this low companies of sufficient size simply treat this as a cost of doing business.
It appears that the maximum fine is 4% of a corporation's global earnings[1] which could be a lot of money, but still "just a cost of doing business" at the same time.
Uber is somewhere around $10b gross revenue, so $400m fine for every breach. Sure it's "just a cost of doing business". It also means that it's better to spend $200m beefing up their security to reduce from 1 data breach every year to one every 5 years.
Marriott revenue is $23b, so that's a potential $920m fine.
IHG (say), who invest in security and don't have a breach, get to charge less for their hotels, or make more profit.
I thought the same thing, but I was corrected here on HN: if you read the exact same link you posted, it says "a fine up to €20 million or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater", so they ARE allowed to fine you EUR 20 million.
Much more than "just a cost of doing business" for the majority of companies.
Fining the company does nothing for the user whose data got leaked. Identity theft isn't a matter of degree; deterring future leakage has zero value. Either there's enough information on the black market to impersonate someone, or there isn't.
So far the market has decided that the economics of protecting users and protecting data just isn't there, and that's why we see what we see.
That's why GDPR happened. "Ok, if you're not going to do anything about it, we'll make you do something about it."
So you're not taking the economists point of view at least from the perspective of the free market rather you're thinking about which economic levers you could pull to effect change from a regulators point of view.
Either your information is known to an attacker, or it isn't. Great security "at most companies" in a hypothetical future doesn't help. You need security better than the best attacker, at every company, all the time.
That's a pipe dream. Instead we should take advantage of public-key cryptography, so that authenticating to one company does not leave behind infinitely reusable credentials for others.
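A minimal sketch of what that could look like: a challenge-response login where the company stores only the user's public key, so a breach leaves nothing reusable behind. This assumes the Python 'cryptography' package; the flow is illustrative, not any particular product's protocol.

    # Sketch: public-key login where the server never holds a reusable secret.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # User side: the private key never leaves the user's device.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()      # this is all the company stores

    # Company side: issue a one-time challenge instead of checking a password.
    challenge = os.urandom(32)

    # User signs the challenge; the signature is useless for any other login.
    signature = private_key.sign(challenge)

    # Company verifies against the stored public key.
    try:
        public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")

Even if the company's database leaks, an attacker gets public keys and stale challenges, not credentials they can replay elsewhere.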
Everyone who knows about security already accepts that everything either already is hacked or will be hacked eventually and no one can stop it. There's a famous quote and I'm not sure if it's originally from former FBI Director Robert Mueller [1] or former Cisco CEO John Chambers [2]:
"There are two types of companies: those that have been hacked, and those who don't know they have been hacked."
Seems like it. Bank vaults are difficult to get into, sure. But they're especially difficult to get into without being detected. Go ahead and force your way in if you want to. You most likely won't have long to enjoy it since you've been witnessed in person, by security cameras, and are now carrying marked bills.
Perhaps in the digital world as well, intrusion detection is more valuable than intrusion prevention?
I don't know what your background is, but IPS is a huge deal already. Tens of billions of cumulative market cap in the space. Your physical "bank vault" analogy doesn't model the threat very well for many reasons. If you happen to know how to solve this, though, you will be incredibly wealthy.
The digital world does not require my physical presence which means that even if you detect me, odds are you can't even identify, much less apprehend me. Bank robbers have no such luxury.
The most valuable data is just necessary to do their business though. Our financial and identity information are the things we hold most dear, but that's also exactly what we need to exchange in order to engage in commerce.
To a degree, yes, which is why it has to be secured properly, with state-of-the-art systems and engineering. I'd venture a guess to say this wasn't. But until we hold executives personally and retroactively responsible, nothing will change.
Not indefinitely, but typically the IRS or a similar institution will require you to store all transaction records for, say, 5 years. Which, from the perspective of hackers trying to steal your data, is as good as "indefinitely".
I really want to know why identity verification can't incorporate better technology, since driver's licenses and passports and such have been upgraded but are still non-interactive (from the user's perspective) and can't prove identity remotely.
i.e., instead of the traditional "what are the [last 4 of] your SSN, and/or we'll tell you three things that may or may not be in your credit history and you have to fill in the blank on each of them"...
...why not just use 2FA?
You give everyone a TOTP secret on a separate card, tied by the government to your SSN, passport number, and state ID. You provide a government mobile app that people can use if they don't want to use a 3rd-party one. When some third party wants to verify your identity, there would be a heavily secured, simple, audited government server that you'd use the app to auth to (SSN + TOTP), returning a temporary auth code/passphrase, stored for 1 day and associated with your SSN. You give that temporary code to the third party, which then verifies that temporary auth code or passphrase with that same government server. You could have an additional voice phone channel to get the temp codes, for people without smartphones.
If your TOTP code card and device are both lost or stolen, you visit in person to get a new one just like normal. Anyone who sees the card can impersonate you, but you shouldn't be carrying it around or waving it around, and even if stolen in individual cases, the scheme eliminates mass identity theft.
U2F could be an option, for anyone with a u2f-capable hardware security key or smartphone, but I'm not sure about mandating u2f because compatible hardware has a non-trivial marginal cost.
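A rough sketch of the core of that flow, assuming the pyotp library for the TOTP part; the enrollment and server handling here are hypothetical, not an existing government API.

    # Hypothetical sketch of TOTP-backed identity verification (assumes pyotp).
    import secrets
    import pyotp

    # Enrollment: the government issues a TOTP secret tied to your SSN/passport.
    totp_secret = pyotp.random_base32()
    citizen_totp = pyotp.TOTP(totp_secret)

    # Step 1: citizen authenticates to the (hypothetical) government server with
    # SSN + current TOTP code and receives a short-lived temporary auth code.
    submitted_code = citizen_totp.now()                 # from the card or app
    assert pyotp.TOTP(totp_secret).verify(submitted_code)
    temporary_auth_code = secrets.token_urlsafe(16)     # stored server-side for 1 day

    # Step 2: citizen hands the temporary code to the third party (hotel, bank),
    # which checks it against the same government server and gets back only
    # "identity confirmed" -- never the SSN, passport number, or TOTP secret.
    print("give this to the verifier:", temporary_auth_code)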
Because the cost of "verifying identity" is a feature! The easier it is, the more frivolous uses of it there will be - imagine basic forum websites deciding the easiest way to cut spam is to demand real world IDs. Google's new practice of requiring of a phone number to make an account would look tame in comparison.
This very case is already an example of overreach. The only reason a hotel needs someone's identity is in case they trash the room and skip out. There is absolutely no reason that it should have been kept after checkout, except that we've been groomed to expect this surveillance based on payment cards being similarly broken.
I can see the possibility for reasonable progress in the EU, where a government ID could carry rules that it couldn't be involuntarily used for business purposes (and assuming that would actually last). But in the US, the government will mandate some base system and then let companies abuse it ad infinitum - even social security numbers are already way too much.
If a forum had the option of requiring real-world ID, they'd make the reasoned business-case tradeoff of whether that's likely to improve the forum/business (less trolling) or worsen it (far less signups because people want to be anonymous on that particular forum).
And as for hotels specifically, they require a real identity for public safety, so police can (not just at check-in, but even months or years later) determine where someone stayed in order to solve cases of murder, child trafficking, or other violent crime.
From the perspective of the user, frivolous (and hostile!). The present state of affairs serves us well enough, and we can gauge how many sites would casually increase surveillance by eg the trend of rejecting mailinator addresses.
There's this gravely mistaken idea that a business's interests are fundamentally based on serving customers' interests. But both sides of any transaction have diverging interests, and in the real world we see industries uniformly implement arbitrary customer-hostile practices rather than compete. This is especially true when the customers' downsides are not easily quantifiable - see the entire advertising-surveillance industry.
And sure, there is always some prudent-sounding reason why more centralization is needed (essentially the "God narrative"), which is exactly why the safety-above-all ratchet moves ever forward. But in the free world this type of thinking is a red herring - the same scare-reasoning can be applied to mandating that every person have a machine-readable ID code tattooed on their forehead, and this only seems unreasonable because it's not present custom.
Because people who should know better are happy to support legal liability around the protection of shared secrets. Companies almost always prefer to protect legacy systems with lawsuits/regulations than to actually harden them.
You could say the same thing about corporate lawbreaking. It's likely given enough time and employees, but that doesn't absolve the company of its responsibility and liability.
We need tough, enforced penalties for data breaches, plain and simple. It's a negative externality, just like pollution, and so can only be controlled by regulation.
> It's a negative externality, just like pollution
No.
Your leaked data does not hurt me. So it is NOT like pollution.
If you do not like your data on the internet - do not give it to companies you do not trust.
> can only be controlled by regulation
Your claim is wrong.
Companies' behavior is controlled by customer demand. The balance between security and convenience is not an exception here: customer demand defines where that balance is. There is no need for the government to intervene in this case.
Past time. There are way too many people comfortable with giving companies their date of birth, passport and social security numbers as forms of ID. So the sane and aware people who don't want to give up that information are left as non-participants, because they are the minority. That's why I think there needs to be a law making it illegal for certain personal information to be stored. Marriott can sufficiently confirm my identity by getting my name, address, phone number, and a credit card number. The latter can be used to verify the former via a 3rd party, the issuer of the credit card. And this verification doesn't require storing the credit card number.
Anyway, these companies are bad at their job as evidenced by the breach happening. And I definitely think we're past the time where it should be illegal for companies to even ask for passport numbers, DOB and social security. A phone company, a hotel, need none of that. They just want it. Big difference.
You're only thinking of transactions. Physical location history (which hotels they frequent, and when) is damaging enough to HNWIs. Name + location + timestamp is sufficient to prove physically dangerous when leaked.
What we need are more options to transact anonymously. This "show ID for everything" culture needs to stop.
I see two possible readings of your comment, which are almost opposites. One is that we should all learn to deal with this much data about ourselves being exposed. The other is that the root cause here was how much data they collected about us in the first place.
I really don't understand why so many companies think they need so much information about me. Or even if they do need it, why such disparate data as passport numbers, credit card numbers, email address, gender, and home address would be stored in the same database.
Is there a law that requires hotels to collect all this data? I've stayed at some cheap motels where they glanced at my driver's license and accepted a couple of $20 bills, and I got a key, and that was the entire transaction.
I'm starting to wonder if it would be a good thing if the data from one of these broad hacks was widely distributed, forcing companies to come up with a better way to verify "identity".
If companies knew that there was a database that everyone has access to with all the data you need to signup for new sources of credit for 50% of US residents, there would be a very strong financial incentive to actually fix the problem.
Maybe, but there's a legitimate fear that this thinking will eventually lead to everyone being required to be implanted with a microchip to participate in the economy
Nope, I am a supposedly successful and functional american adult working a 9-5 like many of yall here. I said 70 years because my life and interests extend beyond just myself. I want 70 years of growth of my net worth to pass on to my children and give them all the gifts that have been given to me.
Yup I wrote that sittin at my office desk in midwest with a jupyter notebook and pycharm open cranking on unit tests =) Lord help us if someone doesnt use perfect prose on the HN! If it makes you feel any better my code is worth 8 figures to some random suits, for better or for worse.
> “Usually when stolen data doesn’t appear, it’s a state actor collecting it for intelligence purposes,” said James A. Lewis, a cybersecurity expert at the Center for Strategic and International Studies in Washington.
Not to downplay it but my email shows 3 breaches in haveibeenpwned.com, and I haven't had any problem till now, apart from probably more emails in spam folder.
Not entirely on topic, but Marriott seems to be dealing with some internal phone abuse issues as well - calls going directly to hotel rooms (bypassing the front desk) and asking for card details to fix broken incidentals records. I got a call like this yesterday and found out that it's enough of an issue that they've printed out signs in the lobby warning guests to not hand out information.
That's interesting. I've had a couple of calls like this in the past and have always insisted on going to reception instead of giving details over the phone. Thankfully mine turned out to be genuine.
Note: This has affected Marriott's "Starwood" division.
Starwood's hotel brands include W Hotels, Sheraton, Le Méridien and Four Points by Sheraton. Marriott-branded hotels use a separate reservation system on a different network.
According to the article the systems were merged 3 months ago.
"The company resolved one major issue involving elite-night credits earned from credit card spending just last week, more than three months after the integration. That problem left many members in limbo, unsure of how close they were to hitting elite-level thresholds before year’s end."
The intrusion was detected on Starwood's system in September according to the BBC article.
"On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database. Marriott quickly engaged leading security experts to help determine what occurred. Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014. Marriott recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it. On November 19, 2018, Marriott was able to decrypt the information and determined that the contents were from the Starwood guest reservation database."
This sounds more like Marriott having better monitoring and once the DBs got merged they figured out somebody had been in the Starwood network for four years.
Assuming poor security practices is becoming the merger equivalent of assuming monetary debt. One wonders what it will take to get companies to audit potential acquisitions' security practices in greater depth.
As a first rough approximation, this figure includes everyone on HN.
It appears to include everyone who's ever stayed in a room at a Marriott, St. Regis, Ritz-Carlton, Bulgari, W Hotel, JW Marriott, The Luxury Collection, Le Meridien, Renaissance, Westin, Tribute Portfolio, Sheraton, Autograph Collection, Design Hotel, Marriott Executive Apartments, Delta Hotels & Resorts, AC Hotels, Element, Gaylord, SpringHill Suites, Courtyard, Residence Inn, Fairfield Inn & Suites, Moxy Hotels, Protea Hotels, TownePlace Suites, Aloft, Four Points by Sheraton, or Marriott Vacation Club property.
For reference, there are under 130M households in the US and around 200M households in the entire EU.
"The hotel chain said the guest reservation database of its Starwood division had been compromised by an unauthorised party."
"Starwood's hotel brands include W Hotels, Sheraton, Le Méridien and Four Points by Sheraton. Marriott-branded hotels use a separate reservation system on a different network."
edit: the Marriott website itself confirms as much that this is limited to Starwood properties.
" guest information relating to reservations at Starwood properties* on or before September 10, 2018.'"
"* Starwood brands include: W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotels & Resorts, Four Points by Sheraton and Design Hotels. Starwood branded timeshare properties are also included."
> * Starwood brands include: W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotels & Resorts, Four Points by Sheraton and Design Hotels. Starwood branded timeshare properties are also included.
"It said some records also included encrypted payment card information, but it could not rule out the possibility that the encryption keys had also been stolen."
I'm not quite sure whether irreversible encryption is a thing in this context (it's sometimes used to talk about e.g. password storage, but I think that's a very different use case), so I'll sally in with an interesting tangent:
> Homomorphic encryption is a form of encryption that allows computation on ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. Homomorphic encryption can be used for secure outsourced computation [...]
If the variant of homomorphic encryption you are using is secure, then the bad guys cannot generate specifically crafted ciphertext that would decrypt into the result they specifically want.
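To make "computation on ciphertexts" concrete, here's a toy demo using textbook (unpadded) RSA, which happens to be multiplicatively homomorphic. It is also exactly the kind of malleable scheme the caveat above warns about, so it's for illustration only.

    # Toy demo: textbook RSA is multiplicatively homomorphic. NOT secure.
    p, q = 61, 53                  # tiny primes for readability
    n = p * q                      # 3233
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

    def encrypt(m): return pow(m, e, n)
    def decrypt(c): return pow(c, d, n)

    m1, m2 = 7, 12
    c1, c2 = encrypt(m1), encrypt(m2)
    c_product = (c1 * c2) % n                      # multiply ciphertexts only
    assert decrypt(c_product) == (m1 * m2) % n     # equals the product of plaintexts
    print(decrypt(c_product))                      # 84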
I would agree with this sentiment; however, there are a number of things that you can do to make the job of the attacker harder, or to notice internally when there is inappropriate usage. Service abstractions to encrypt/decrypt the data will limit the surface area where you might store keys. Logging usage tells you what operations were performed. Also, you can use a system like Vault so that the keys are not extractable and the crypto operations can't all be done offline, thus massively increasing the chances you would detect such a breach / unauthorized access.
In my early days, I had to write a payment processor and have it pass a PCI "inspection."
TL;DR - PCI compliance was a joke in my experience.
The payment page didn't log requests (which made figuring some things out hard). It submitted the encrypted card data (public key) to another server through the firewall, with only a single non-standard port open, to a data store with write-only access. A trigger on insert would then launch a small console app that passed the decrypted (private key stored locally) card and charge information to Authorize.Net; if successful, it would write a success record to the charges database and send that charge identifier to the application's data store.
The only access to the payment server was from behind the firewall, over RDP and MSSQL on non-standard ports.
Just the description of this satisfied the PCI guy, and no physical inspection of the hardware was ever done. I'm not sure whether what I did was best practice, and I was always scared it would get hacked. Thankfully, after so many fraud charges that my boss couldn't handle the $15 chargebacks, we switched to PayPal and I no longer had sleepless nights.
Can anyone recommend solutions (for those that don't know) for having an encrypted database of sensitive information, i.e. first name, last name, IP, geo data, etc., where the encryption keys are not available to hackers once they have essentially gained root?
Segmentation can help. For example, you can use envelope encryption[1] that keeps your keys on an isolated, dedicated key management server that prevents even the admin from exporting the master keys. Therefore, decryption of the data keys must be performed on the key management server.
It’s still not perfect, because an attacker can potentially send requests to the key management server, but they shouldn’t be able to walk away with the keys and perform decryption outside of the system.
You then set up monitoring to watch how many decryption operations are being performed per minute or hour and alert admins when it steps outside of normal usage patterns.
It’s not perfect, but it can help you 1) catch an attacker early and 2) have some kind of estimate of the size of the breach when it is discovered. A very patient attacker may not trigger an alarm on decryption operations, but in that case they have to work much more slowly, which limits the scope of the attack while they hopefully trigger something else that exposes them.
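A minimal sketch of that kind of rate alarm; the threshold and alert hook are made up for illustration.

    # Sketch: alert when decryption requests exceed a normal baseline.
    import time
    from collections import deque

    DECRYPTS_PER_MINUTE_LIMIT = 500   # tune to your normal usage pattern
    recent_decrypts = deque()         # timestamps of recent decrypt calls

    def alert_admins(message):
        # Stand-in for paging/Slack/email; replace with your alerting system.
        print("SECURITY ALERT:", message)

    def record_decrypt_and_check():
        now = time.time()
        recent_decrypts.append(now)
        # Drop timestamps older than 60 seconds (sliding window).
        while recent_decrypts and now - recent_decrypts[0] > 60:
            recent_decrypts.popleft()
        if len(recent_decrypts) > DECRYPTS_PER_MINUTE_LIMIT:
            alert_admins(f"{len(recent_decrypts)} decrypts in the last minute")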
Have a server responsible solely for decryption and audit access to it. The server doesn’t issue keys, it literally does the decryption. AWS KMS can do this for you.
The gist is you create an encryption key for your row, encrypt it using your encryption service, and store it next to your actual payload. To decrypt, ask the service to decrypt the key, which you then use to decrypt the payload. If your database gets popped, your decryption server hopefully didn't, because you hardened it specifically.
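A rough sketch of that gist, assuming AWS KMS via boto3 for the wrapped key and AES-GCM for the per-row encryption; the key alias and field names are placeholders.

    # Sketch of envelope encryption: per-row data key, wrapped by a KMS master key.
    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")
    MASTER_KEY_ID = "alias/guest-records"   # placeholder KMS key alias

    def encrypt_row(plaintext: bytes) -> dict:
        # Fresh data key from KMS: plaintext copy + copy wrapped by the master key.
        resp = kms.generate_data_key(KeyId=MASTER_KEY_ID, KeySpec="AES_256")
        nonce = os.urandom(12)
        ciphertext = AESGCM(resp["Plaintext"]).encrypt(nonce, plaintext, None)
        # Persist only the wrapped key, nonce and ciphertext; never the plaintext key.
        return {"wrapped_key": resp["CiphertextBlob"], "nonce": nonce, "data": ciphertext}

    def decrypt_row(row: dict) -> bytes:
        # Only a principal with permission on the KMS key can unwrap the data key.
        data_key = kms.decrypt(CiphertextBlob=row["wrapped_key"])["Plaintext"]
        return AESGCM(data_key).decrypt(row["nonce"], row["data"], None)

A stolen database dump then contains only ciphertext and wrapped keys, which are useless without the ability to call KMS.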
Well, it's still tough. The question is always "gained root to what", cause at some point somebody needs a key.
Envelope encryption is a good scheme. Under envelope encryption you have a master key and per-row or per-unit keys. You store the per-unit keys encrypted with the master key, and the master key is stored offsite.
For example on an AWS environment, you would use AWS KMS with IAM authentication to handle the master key crypt for the individual keys, and the encrypted versions of the individual keys are stored in a database that you own.
You encrypt your data with the individual keys (eg, one key per user, or one key per DB row, or one key per namespace, whatever), but the individual keys are decryptable only by KMS.
Under this scheme an attacker must both be able to get ahold of the encrypted individual keys, and be able to decrypt them with the master key.
Of course this leaves you very vulnerable if the master key is acquired, but KMS does not allow you to read the master key at all, you have to use the AWS API to make encryption/decryption requests. So the vulnerability is less about how secure the key is and more about how secure your IAM setup and instances are.
The application necessarily has access to the credentials it needs to do all of this. You're only eliminating the subset of attackers who can get in, but not figure out how to imitate your use of cryptography. Is the cryptography part even relevant? Since the attacker ultimately does have use of the key material, there's no information-theoretic security provided by the encryption, only some extra work to do. Wouldn't any other indirection/obfuscation accomplish the same goal?
Or are we assuming that the decryption service enforces some policy about what and how much to decrypt?
Well, yes, if you're using AWS KMS you are able to set up policies, revoke access through IAM, use alerts and monitoring, and so on.
But you're also forgetting that most data breaches in real life involve database backups, staging servers, development environments, and people with access. So envelope encryption really helps prevent those. Even if someone gets a hand on your DB dump they're unable to use it without authorization to the KMS, which they won't have unless they can also get into your application server.
I guess I’m missing how credentials with access to KMS are any harder to come by than credentials with access to RDS, and why it’s easier to monitor KMS queries than RDS queries. Are the latter privileges more widely distributed, maybe? Where is it useful to have access to a database you can’t actually read? Do your dev/staging environments just no-op the decryption and put ciphertext strings everywhere prod shows plaintext?
Maybe your DB creds are in a secure configuration system and maybe the KMS credentials are granted by an IAM instance role. If only certain users are able to get access to config (ie RDS), and only certain machines are able to decrypt data, you now have a multilayered security approach where an attacker needs to breach both a user and a piece of hardware. If you breach a user and get your hands on a DB leak, you can't use it because you don't have access to the hardware. If you breach the instance role then you can make decrypt requests, but you would have also needed to have breached the data separately and independently.
Don't forget that data is most often leaked by people. DB backups, dev environments, inappropriate transfers. We hear about the big targeted attacks, but probably 90% of actual breaches are due to human misuse.
If you don't need to decrypt the data on the same system, you could always use asymmetric encryption, so encrypt with the "public" key and then keep the private key elsewhere.
If you require to be able to decrypt on the system in question then you need a key somewhere, so secure storage of the key is very important.
One option, often deployed in banks, is to store the key in a hardware security module (HSM) and then ensure that it's not available outside of that.
There's a load of tradeoffs to using HSMs, but they can be useful where symmetric encryption is needed.
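For the asymmetric option, a minimal sketch using the Python 'cryptography' package with RSA-OAEP; fine for small fields like a passport number, while larger payloads would need a hybrid scheme.

    # Sketch: the web tier encrypts with a public key; the private key lives
    # elsewhere (offline machine or HSM), so the web tier cannot read what it stores.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # In practice the private key is generated and kept off the web-facing system.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # only this is deployed with the app

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Collecting system: encrypt-only.
    ciphertext = public_key.encrypt(b"passport: X1234567", oaep)

    # Decryption happens only where the private key lives.
    print(private_key.decrypt(ciphertext, oaep))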
This was solved by banks a long time ago with hardware security modules (https://en.m.wikipedia.org/wiki/Hardware_security_module). HSMs make it impossible to extract encryption keys from the device and turn a digital security problem into a physical security problem, which banks knew how to manage. Are your encryption keys secure? Yes, they are right there.
Typically the HSM will be part of a tokenization service that is located in a separate security zone also called a data bunker or data vault. The customer facing application will pass in the sensitive data and in its place a token will be returned. When the calling application needs the data it will pass in the token and the data is returned.
An attacker that compromised the customer-facing application only has access to tokens and would be forced to access sensitive data via the data bunker. That application is heavily monitored, so such access would hopefully be quickly noticed.
The HSM adds an additional layer of protection against loss of sensitive data via backup tapes or some insider attacks.
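A toy sketch of the tokenization pattern described above; the in-memory dict stands in for the vault, which in reality sits in its own security zone, usually backed by an HSM.

    # Toy tokenization service: the app keeps only tokens; real values live in
    # a separate, heavily monitored vault. Purely illustrative, not production code.
    import secrets

    class TokenVault:
        def __init__(self):
            self._store = {}              # token -> sensitive value (vault side only)

        def tokenize(self, sensitive_value: str) -> str:
            token = "tok_" + secrets.token_urlsafe(16)
            self._store[token] = sensitive_value
            return token                  # this is all the calling app keeps

        def detokenize(self, token: str) -> str:
            # In a real bunker, every call here is authenticated, logged and alerted on.
            return self._store[token]

    vault = TokenVault()
    token = vault.tokenize("4111 1111 1111 1111")
    print("app database stores:", token)            # useless to an attacker on its own
    print("vault returns:", vault.detokenize(token))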
It's been several years since I dealt with PCI, but the app I was on the team for handled $1B+ annually. Here's just a few points that I remember.
Physically separate the key storage from the data storage. Different servers, different user auth mechanism. Make them compromise both systems.
Unique keys per client.
Use key-encrypting keys, periodically replace them.
PCI environment must be totally self contained and not mixed with standard working environment. If an employee is phished, their credentials for email/etc should not gain access to the env.
Don't allow developers to deploy or even read from prod db. Have a separate deployment team, all access to prod database audited via production tickets.
I’m not sure there is a one-size-fits-all right solution here - but having a system where decryption keys should not be extractable by software alone (e.g. an HSM, a service such as AWS KMS or the equivalents in other clouds) sounds like it would have been an improvement in this case (though with no post-mortem it’s hard to know exactly what went wrong).
Assuming a layered system, something like Vault transit encryption (perhaps with the master key in an HSM) should keep decryption keys away from front-end machines, though as ever, once you have root and can access the memory of a machine where the encryption keys are stored, most bets are off.
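For reference, a rough sketch of what Vault's transit engine looks like from the application's side, using its standard HTTP API; the key name and token handling are placeholders.

    # Sketch: encrypt/decrypt via Vault's transit secrets engine, so the
    # application never sees the actual encryption key.
    import base64
    import os
    import requests

    VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
    HEADERS = {"X-Vault-Token": os.environ["VAULT_TOKEN"]}

    def transit_encrypt(key_name: str, plaintext: bytes) -> str:
        resp = requests.post(
            f"{VAULT_ADDR}/v1/transit/encrypt/{key_name}",
            headers=HEADERS,
            json={"plaintext": base64.b64encode(plaintext).decode()},
        )
        resp.raise_for_status()
        return resp.json()["data"]["ciphertext"]        # e.g. "vault:v1:..."

    def transit_decrypt(key_name: str, ciphertext: str) -> bytes:
        resp = requests.post(
            f"{VAULT_ADDR}/v1/transit/decrypt/{key_name}",
            headers=HEADERS,
            json={"ciphertext": ciphertext},
        )
        resp.raise_for_status()
        return base64.b64decode(resp.json()["data"]["plaintext"])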
The general idea is to insert some kind of bogus information into a system, with no links to other systems, and then trigger a notification if someone accesses it.
One tool is to only allow access to this data from the web FE via an API. This API is then monitored for spikes in usage, so if a given FE is suddenly requesting a lot more than normal, it gets flagged. It doesn't stop targeted attacks, but it does show the big ones - another reason why security needs to be multi-layered.
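A tiny sketch of the canary-record idea from the comment above; the record IDs and alert hook are invented for illustration.

    # Sketch: plant canary ("honeytoken") records that no legitimate code path
    # ever reads; any access to them triggers an alert.
    CANARY_GUEST_IDS = {"guest-00000001", "guest-deadbeef"}   # bogus, unlinked records

    def alert_security_team(message):
        # Stand-in for your real paging/alerting integration.
        print("SECURITY ALERT:", message)

    def fetch_guest_record(guest_id, db):
        if guest_id in CANARY_GUEST_IDS:
            alert_security_team(f"canary record {guest_id} was accessed")
        return db.get(guest_id)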
Keep in mind they've been compromised for 4 years. For that time duration it's not just about protecting data at rest, it's also about protecting data as it is processed inside your systems.
You are already fucked at this point. I would focus my attention on preventing hackers from 1) gaining any unauthorized access and 2) doing any sort of privilege escalation from a restricted account/service.
Maybe keep the keys only in memory in each application? Pull them from some very well monitored service once at startup? Anyone who gained access to the server would have to check the program's memory to get them.
How is that the "Oh dear."? Do you work for a bank?
That seems like the least interesting bit of information here for individuals.
It's really bizarre to me that people here seem to consider their credit card numbers more sensitive than their travel history or even contact information.
Why is it bizarre? Most people wouldn't consider their comings-and-goings to be worthy of any privacy protections, though some people have non-criminal reasons to want to protect that privacy. Contact information likewise is generally public information, some people may have non-publicized means of contacting them though that they want to keep private.
Generally people's level of care will correlate to what nefarious purposes the data can be used for. There aren't that many such purposes with the data exposed here until you consider the passport number (probably something more secure than a SSN for most Americans?), payment info, or login info. The purposes I can think of for the other data are reliant on the metadata that so-and-so was at that location at that time, when I might have believed something else.
Well, possibly not. If the attackers got the keys to decrypt payment details, that means victims, in addition to the identity theft risks, may also face fraudulent transactions on those cards.
Anyone hit with fraudulent charges has to a) notice them, b) dispute them and c) go through the annoying process of having your card re-issued.
As to privacy, after the number of breaches of personal info there have been, I'd be inclined to let go of the idea that this level of personal data is private :)
Don't you review your charges regularly? Even without fraud you're bound to be losing money to erroneous charges if you don't.
>b) dispute them
I've usually just had to submit a quick online form consisting almost entirely of checkboxes; while, yeah, that's some work, it's still not going to take more than a minute of my time.
>c) go through the annoying process of having your card re-issued.
Seems like this is vastly more annoying with some card issuers than others.
I notice sure, but that doesn't mean all of the x million people who've just had their cards stolen will.
Also not all of the cardholders will be online customers, so they'll have the delights of call center processes to go through.
Whilst it's obvious you don't regard loss of your credit card information as a serious inconvenience, I don't think that's necessarily a universal sentiment.
I work at a bank, and you would be surprised how many people don't know how much they have in the bank until they get the message saying there's no money.
so many people do this. As long as they don't get that message, they don't worry about how much is in there.
and according to the article: "It said an internal investigation found an attacker had been able to access the Starwood network since 2014."
4 years of access, surely the attacker didn't manage to get hold of these keys
Curious if it matters much that the keys were stolen too. How long would it take to brute force a 16-digit number, given a known subset of 4-digit leaders and the ability to cull the list first for ones with valid Luhn checksums?
Edit: Ugh. Yes. My mind was apparently elsewhere. Keyspace would drive the time needed.
Though it is a relatively short list of known plaintext, especially if you focus on the BIN ranges for, say, the three most popular banks in the US.
But it's only interesting if Marriott was using some encryption with a small keyspace.
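For reference, this is the Luhn check that would be used to cull candidates; it accepts only one in ten random numbers, so it prunes about 90% of the 16-digit space before any guessing.

    # Luhn checksum: quick validity filter for candidate card numbers.
    def luhn_valid(card_number: str) -> bool:
        digits = [int(d) for d in card_number]
        total = 0
        # Double every second digit from the right; subtract 9 if it exceeds 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111111111111111"))   # True  (classic test number)
    print(luhn_valid("4111111111111112"))   # False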
> Of course I really hope the Marriott weren't storing CVV in a reversibly encrypted format.
You're not allowed to store the CV2 in any form that could be recovered (i.e. plain text or reversibly encrypted) or brute forced (i.e. hashed/salted). PCI rules say you simply aren't allowed to store the CV2 after the call for an authorisation, as it's no longer required. If they were storing the CV2 then they're in trouble.
The linked article reads like most of that was not encrypted. The CVV wasn't listed as being stored, but CC number, name, and expiry, without the CVV are usable in the US, even online in many cases. A CC charge without CVV doesn't hard fail, so it's the merchants choice as to whether to even ask for it.
But you can buy that information for $1 a pop (or far less if you buy in bulk, think $0.3 or so).
Credit card fraud is far more involved than just getting payment information, you won't succeed at ordering anything of value without understanding how anti-fraud systems work.
Actually using the card information is almost entirely left to the lower-end criminals, it's just ridiculously difficult to scale.
After spending years hanging around in those circles I'm rather convinced that the only people making real money with credit card fraud are the shops, hackers stealing the cards and reshipping services.
The biggest buyers on the shops seemed to be criminal gangs engaging in relatively small-scale fraud maybe moving hundreds of thousands a month.
I don’t follow. Why is it the size or structure of what’s being encrypted that matters for brute-force time, rather than the key space of the encryption algorithm?
Password hashing is different because there is no key space.
Card information has no value because it is very easy to reissue a card. Things like name or phone number, or address, cannot be changed so easily and have much higher value.
Card information is literally cash money. It's volatile and subject to being rendered defunct, but it still has a lot of value if you can buy a bunch of prepaid gift cards with them before it's deactivated.
Imagine you could make payments with a salted and hashed credit card number.
Except that doesn't really make anything better, because now an attacker could simply use that salted and hashed credit card number elsewhere to make payments too!
The real solution is to use something like OAuth for payments. You authorize a merchant to take ongoing payments from you, and the card issuer gives the merchant a token which is only useful for making payments from you to them, and can't be used to make payments to anyone else.
Apple pay uses a card number per merchant, but that card number could still be used at any other merchant. Eventually that loophole could be closed, but payment systems move like molasses... Expect it to take 20+ years...
It simply means if the card number is used for fraud, it'll be easier to track down who leaked the card numbers.
Whilst you can't hash it like a password, since you need to be able to transfer it onwards, modern systems make tokenisation possible: you immediately pass the details to the card processor, don't store them at all, and get back a token that can be used for that transaction.
It's like you need to keep a running diary of every single service you've used / every single place you've been so when something happens like this, maybe you can find out if you actually used that service or visited that place.
I think I stayed at a Starwood 2 years ago in PA? But I don't remember if it was a Starwood or some other Marriott brand.
Does it really matter at this point? It's safe to assume all your data has been compromised. Given the state level IRS hacks that have happened in quite a few states, the OPM hack, and the hundreds of business hacks no one is safe anymore. Everyone should operate under the assumption their information is known and do things like freeze their credit and keep an eye out for the data being used.
Sure it does. Just because they "apologized" doesn't mean it should be acceptable or dismissable.
These breaches keep happening, and these companies continue to not be held accountable or punished in any way, except for bad press for 2-3 days until everyone forgets and it's on to the next security breach.
You want me to shop at your store or use your services, you want me to join your mailing list or give you my address, CC and phone # - I expect at the least, for that information to be kept secure.
I agree companies should be punished and held accountable. As an aside, how does one go about punishing the government though?
The point I was making is that everyone should assume their data is already compromised. The weakest link will always be an issue, and in many cases it is one you cannot control - the government.
Us in the IT field - programmers, sys admins, whatever - yes, we probably should assume our data is insecure because we're in it. We see these breaches here on HN and we know what to expect when they happen.
As everyone always says about these types of things. We are not the target market so to speak.
Joe user doesn't know to read HN and gets their news from the TV IF they happened to be watching when it was covered.
I'm agreeing with you. I know we should know to be careful. But it's still not acceptable.
This was one of the unexpected perks of moving to FastMail, which has been an awesome experience so far. I'd highly recommend it.
You can do something similar in Gmail with `name+label@gmail.com`, but a shocking number of sites ban valid characters from email addresses (intentionally?). That and a lot of clients have super annoying alias settings, so replying to such an email is a royal pain.
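If your provider does support it, generating and recognising the aliases is trivial. A rough sketch, borrowing the example mailbox mentioned further down the thread:

    def alias_for(site, mailbox="gammateam", domain="gmail.com"):
        # e.g. alias_for("starwood") -> "gammateam+starwood@gmail.com"
        return f"{mailbox}+{site}@{domain}"

    def label_of(address):
        # Pull the label back out of an incoming To: address to see who leaked it.
        local = address.split("@", 1)[0]
        return local.split("+", 1)[1] if "+" in local else None

    print(alias_for("starwood"))                     # gammateam+starwood@gmail.com
    print(label_of("gammateam+starwood@gmail.com"))  # starwood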
This is why I do my utmost to avoid entering payment details on as many sites as I can.
Unique passwords (and usernames too if that is an option) are easy to manage via lastpass et al. Unique email addresses are harder but you might be able to fudge something using the "+ label" feature. But the real challenge is payment details. I'd be quite happy using Paypal everywhere if that were possible as then I'd only need to worry about Paypal getting hacked.
What I really don't like is shopping sites that require me to enter my payment details (or worse: require me to save payment details). I avoid those places in almost all cases.
I use privacy.com to generate unique credit cards for most purchases/services. These cards are restricted to a single merchant and can have a max limit added. It's the same idea as generating unique passwords with 1Password et al.
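Conceptually it boils down to something like this toy model (this is not privacy.com's actual behaviour or API, just the merchant-lock-plus-cap idea):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VirtualCard:
        number: str
        limit_cents: int
        merchant: Optional[str] = None  # locks to the first merchant that charges it
        spent_cents: int = 0

        def authorize(self, merchant, amount_cents):
            if self.merchant is None:
                self.merchant = merchant          # lock on first use
            if merchant != self.merchant:
                return False                      # any other merchant is declined
            if self.spent_cents + amount_cents > self.limit_cents:
                return False                      # over the spending cap
            self.spent_cents += amount_cents
            return True

    card = VirtualCard(number="4000-0000-0000-0002", limit_cents=1500)
    print(card.authorize("netflix", 1299))         # True
    print(card.authorize("some-other-shop", 100))  # False: locked to the first merchant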
For people whose credit cards don’t hold them liable for fraudulent charges (everyone I know of in the US), I feel like it’s a waste of time. Check your statements monthly and/or set up push notifications, and if your account # is compromised, call the bank, dispute the charges and have them mail you a new card.
Getting a card cancelled is a massive inconvenience - not least of all because you're without a card for the short term - but then you need to update your payment details on every service you use (that isn't a direct debit). Having one place to enter your details is way more convenient.
Plus you get the bonus of having an auditable trace (i.e. via unique virtual cards) of who made what payment. This would be invaluable if you then want to contact the business that compromised your details. For example, in some instances they might not even be aware that they've been hacked - e.g. if they're running an off-the-shelf solution like Magento but haven't kept up with security updates.
So there are still some benefits to the aforementioned service, even aside from the question of liability.
Also I can stay a bit more anonymous to sellers - when checking out with a virtual card, I can use any name or billing address and it will be accepted. I think 90% of online businesses do not need to have my real name and address in their database (for example SaaS services, digital goods, HBO/Netflix/WSJ subscriptions etc.)
> What I really don't like is shopping sites that require me to enter my payment details
Would this soften the blow if they used an indirect payment system like PayPal instead of directly entering card information? Otherwise how else could you shop online?
> Would this soften the blow if they used an indirect payment system like PayPal instead of directly entering card information? Otherwise how else could you shop online?
I did say "I'd be quite happy using Paypal everywhere if that were possible". Or am I not understanding your question?
> Sites actually do this?
It used to be common years ago but few seem to these days, thankfully. I have still encountered the odd site that does though (or at least not made it clear that they do not store those details).
Also, using unique passwords as well as aliasing your email address (e.g. gammateam+starwood@gmail.com) helps you isolate which breach came from where: the spam emails you get will start having the alias on them.
If history is a guide, Marriott will sign a contract with some financial security monitoring service and offer a ~12 month free period for affected consumers. I've always thought these products are b.s. The advertising is scammy. I'm betting they leak even more data, like your purchase histories.
Does anyone know the efficacy of these monitoring services? If they were even slightly better than even odds, I would say that consumer protection laws should require free monitoring for a longer period, say 24 or even 36 months. Ironically though, proper monitoring means sharing all of this same personal information with a 3rd party, and then some.
I also wonder if it's just more effective to take advantage of the free credit report freezing feature, since that doesn't require me to share even more personal information with a 3rd party; and actually restricts access to personal information instead of expanding it.
In addition to this, as per GDPR, the personal information (name, passport, address, email, phone) should have been encrypted in the database, just as the payment details were. This would have reduced the effect of unauthorized database access.
I'm guessing this was not done either, even for EU guests.
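For what it's worth, field-level encryption of that kind of data isn't exotic. A minimal sketch with the `cryptography` package's Fernet recipe; real deployments would keep the key in a KMS/HSM and worry about rotation and searchability, all glossed over here:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice: fetched from a KMS, not generated inline
    fernet = Fernet(key)

    def encrypt_field(value):
        return fernet.encrypt(value.encode())

    def decrypt_field(blob):
        return fernet.decrypt(blob).decode()

    # The row that lands in the database holds ciphertext, not the raw values.
    row = {
        "guest_id": 12345,
        "passport_number": encrypt_field("X1234567"),
        "email": encrypt_field("guest@example.com"),
    }
    print(decrypt_field(row["passport_number"]))  # X1234567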
Ah, true. I hadn’t internalised that you only have to announce it publicly in cases with “a high risk of adversely affecting individuals’ rights and freedoms”.
AFAICT companies can still hold all your information /if you consent/, but they can't withhold service if you don't consent unless they can prove that information is required for provision of the service. So: can they show a passport number is required in my jurisdiction? If not, then I can have a room without providing it.
I recently got added to Starwood Preferred Guest. I still don't know why they have my e-mail (don't seem to have stayed at any of their hotels), but I guess it's out there now, even though it wasn't in HIBP before.
If they didn't keep the data from long ago, then the damage would be much smaller. Such companies definitely need some help from lawmakers. Companies shouldn't keep personal information for years.
A handful of years ago, Paul Ohm wrote about a concept he called the "Database of Ruin" [0]. I think about it every time one of these pieces of news breaks.
"Once we have created this database, it is unlikely we will ever be able to tear it apart."
I just signed in to Marriott.com to see what info they have on me that was stolen, and was forced to change my password. It even required email verification, which is good.
Then when I tried to log in with my new password I was rejected, saying my account is 'under audit' for suspicious activity. God dammit.
Great: massive fines, significant lawyer billing, some worthless credit monitoring service free for 6 months, and screw the affected consumers.
Let's see: Target is probably the most obvious parallel: $202 million in reported legal fees and other costs; $18 million to states (fines); $39 million to the financial institutions affected by the breach; and a whopping $10 million for the consolidated class action lawsuit (along with the $6.75 million for plaintiffs’ attorneys’ fees and expenses).
Oh wait, Target annual profits are $20 billion? Never mind.
I'm thinking they don't really expect you to call internationally and wait on hold for an hour or so, but they have calculated that a certain number of people will, even though any single individual is highly unlikely to do so.
Outside of the privacy problems (which, let's be honest, our data is already out on the web) if you changed your credit card since your last visit you're probably safe on that particular financial attack vector, right?
So I have to get a new passport, get a new phone number, get a new credit card, change my email address... in the US, can I sue in small claims court to recover the costs of doing these things?
> It said some records also included encrypted payment card information, but it could not rule out the possibility that the encryption keys had also been stolen.
Why did the world end up with a pull-system for payments? Why do I have to give out my credit card number and enable the other side to pull arbitrary amounts as often as they like?
This is one of several things crypto currencies got right. You pay by pushing money to the other side.
The short answer is that credit cards were invented over 40 years ago, to operate in an offline and indeed paper-based system (hence the embossed digits for use in mechanical duplicators). For a more modern system look at contactless.
FWIW I think the "one shot push" model of cryptocurrencies has some serious limitations which I think should be addressed by an "invoice" system - e.g. you can easily pay the wrong person, or even an address which doesn't exist and is owned by nobody! Not to mention dumb errors such as swapping the payment and tip fields. It would be better if the payee crafted a cryptogram for "please pay me (authenticated address) the sum of X", and the payer was simply generating an approval of that.
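A toy version of that invoice/approval idea, just to make the shape concrete (this is not any existing wallet protocol; the address and amount are placeholders):

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    payee_key = Ed25519PrivateKey.generate()
    payer_key = Ed25519PrivateKey.generate()

    # The payee crafts and signs the invoice naming its own address and the amount.
    invoice = json.dumps(
        {"pay_to": "payee-address-placeholder", "amount": "0.01", "coin": "btc"},
        sort_keys=True,
    ).encode()
    invoice_sig = payee_key.sign(invoice)

    # The payer verifies the invoice came from the payee's key, then signs an
    # approval of those exact bytes: no retyped address, no swapped tip field.
    payee_key.public_key().verify(invoice_sig, invoice)  # raises if tampered with
    approval_sig = payer_key.sign(invoice)

    # Whoever settles the payment checks the approval before moving funds.
    payer_key.public_key().verify(approval_sig, invoice)
    print("invoice approved")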
> It would be better if the payee crafted a cryptogram for "please pay me (authenticated address) the sum of X"
Are you talking about offline? Because on a website I would expect the payer to simply copy&paste the address.
I would also expect a link type to evolve that browsers understand. Something like "payto:1fs8e...?amount=0.01&coin=btc" to pay 0.01 bitcoin to 1fs8e... Similar to the "mailto:user@host?subject=...&body=..." link type. (A rough parsing sketch follows after this comment.)
And when we talk about offline (paying at a restaurant or something), isn't there a visual link type already? I am not a user of crypto, but I think I have seen barcodes or something used for this.
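To the link-type point above: the standard library can already pull such a link apart. The address and parameters below are just hypothetical stand-ins for the example given:

    from urllib.parse import urlsplit, parse_qs

    link = "payto:1fs8eExampleAddressOnly?amount=0.01&coin=btc"  # hypothetical

    parts = urlsplit(link)
    params = parse_qs(parts.query)
    print(parts.scheme)         # payto
    print(parts.path)           # 1fs8eExampleAddressOnly
    print(params["amount"][0])  # 0.01
    print(params["coin"][0])    # btc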
For Bitcoin, BIP 70 was proposed (and quite widely implemented) to take care of a lot of these issues. I assume many other cash-style cryptocurrencies have something similar these days.
Addresses contain a 4-byte checksum, so it's very unlikely that a typo produces a valid address. Also, the Lightning Network (a second layer over Bitcoin) works exclusively by invoicing, as you described.
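For the curious, the check for a legacy (Base58Check) address boils down to the following rough sketch; note that newer bech32 addresses use a different checksum scheme:

    import hashlib

    B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58decode(s):
        n = 0
        for ch in s:
            n = n * 58 + B58_ALPHABET.index(ch)  # raises ValueError on bad chars
        pad = len(s) - len(s.lstrip("1"))        # leading '1's encode zero bytes
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return b"\x00" * pad + body

    def has_valid_checksum(address):
        try:
            raw = b58decode(address)
        except ValueError:
            return False
        if len(raw) < 5:
            return False
        payload, checksum = raw[:-4], raw[-4:]
        return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

    # Flipping a single character of a valid address makes this return False
    # with probability about 1 - 2**-32.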
I don't think crypto currencies really deserve much credit for this. It kind of already exists.
I have authorized bank withdrawals for paying some bills. This is the "pull" model and I don't really feel all that comfortable with it. I once had an irritating issue where a no-longer-authorized withdrawal didn't stop properly.
But I also have scheduled payments, the "push" method, where my bank account will routinely send a configured amount of money to a configured account/company at a configured interval. I like this because of the control and responsibility it gives me.
Something I wish Canadian banks had was a better API for allowing me to say, "instead of paying $x each month, query the bill and pay the exact sum as long as it doesn't exceed $y".
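Roughly the desired behaviour, if such an API existed (every call on `bank` here is imaginary; no Canadian bank I know of exposes anything like it today):

    def pay_bill_up_to(bank, biller_id, max_cents):
        """Pay exactly what is owed this cycle, but never more than max_cents."""
        owed = bank.get_bill_amount(biller_id)   # hypothetical API call
        if owed <= 0:
            return 0
        if owed > max_cents:
            raise RuntimeError(
                f"bill of {owed} cents exceeds the {max_cents} cent cap; not paying"
            )
        bank.send_payment(biller_id, owed)       # hypothetical API call
        return owed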
I did not say it was invented by crypto currencies.
Banks have not managed to implement a payment process for e-commerce yet. You have to log in, fill out a form, do some kind of verification, wait for a day or more... yuck! That's fine for big transfers that only occur rarely, but for a quick online payment it sucks. That's why nobody is using it for this use case.
Most European countries have near-immediate transfers of "low" amounts (e.g. up to £25,000 for my British account).
Several European countries have an easy online payment process built on top of this.
Sweden has Swish, Denmark has MobilePay, the UK has created PayM (but with little use so far), Finland and Norway have similar systems. These are mobile apps commonly used for transferring money between friends/colleagues, or paying for things from very small businesses or casual transactions. They can also be used online.
There's an example flow with screenshots in the API documentation [1 PDF] for Swish on page 7. After selecting Swish at checkout, a message is sent to the phone app; the user opens it to authorize the transaction.
(And to the GP post: the automatic payments systems in Europe come with strong guarantees. The UK one: "If an error is made in the payment of your Direct Debit, by the organisation or your bank or building society, you are entitled to a full and immediate refund of the amount paid from your bank or building society" and "You can cancel a Direct Debit at any time by simply contacting your bank or building society." — the latter is implemented with a "Cancel" button against each authorized company in my online banking.)
Have to say, the last time my card was replaced for fraud, it was easy. The bank noticed it and called me, cancelled my card, sent a new one (I had it 2 days later), and I had to update the payment information on a handful of websites/services (Amazon, Google Pay and PayPal covered most of my bases).
It had the nice side effect of emailing me to tell me that payment for X service had been declined, which made me think twice about renewing it (Netflix).
> Also, what if you had currently been somewhere else around the globe? How would you have gotten the new card?
I guess that may have been a concern, but it wasn't for me. We could come up with any number of scenarios I'm sure, but in reality travelling with a second card covers 99% of the eventualities.
Someone had fraudulently purchased a 23andMe kit on my card (weird, right?) and I only caught it because my account was abnormally low (~250 CAD). I phoned the bank right away and they cancelled the card number, put my account on watch, reversed the charges and I was able to go into the bank and get a new card on the spot. (VISA Debit, not a CC)
* spend the next 6 months updating autopay stuff that for some reason keeps working for 5 months after the card number changes, but then fails in month 6.
That's interesting, I wonder which bank you use that would require a phone call and paperwork when a card is compromised.
It happened to my CC a year or so ago, and my wife just went through that with her debit card 2 weeks ago actually. In both cases, the bank actually alerted us about weird transactions, immediately canceled them and asked us to review online. We indicated that they weren't ours, the bank canceled our cards and sent replacements.
Now, updating services is indeed a pain. I have a policy of never saving CC information on websites (even though it would make "my shopping more convenient next time", as they say). I do that for safety reasons, even though I have close to no illusions: they probably still keep it in their DB even when I decline. So for me it didn't really disturb my workflow when paying bills and all.
Why would you need to fill out paperwork? This seems to strictly be a problem with your financial institution and not with fraud.
>* Wait for a new card to arrive
I had my card replaced today because of fraud; my current card will work until I activate the new card, which was overnighted (internationally, even) and should arrive Monday.
I don't think that's a big issue.
> * Go through all services I use the card for and change it
All the services which I have subscriptions with seem to automatically update when I get a new card.
If the process of replacing your card is a hassle, your financial institution is fucking you over. It has nothing to do with fraud.
I guess we've established that the problem isn't really fraud, but financial institutions deliberately making it difficult for consumers to deal with fraud in an effort to shift blame away from their own insecure payment systems?
I don't understand your question. Are you serious? Even if my money is completely protected from fraud, if I have to spend one second dealing with some of the problems from such fraud and the stress it could cause, I would never want my credit card numbers compromised.
Why are you asking what you think are rhetorical questions? Because the fix is not trouble-free on my end. I assume that you’ve never had the experience if you’re asking.
Most of my bills are automatically paid on the card, so I had to update my payment information on ~20 sites. At 2-3 minutes per site (optimistic, really), plus the fact that one site requires me to fax in the request, it easily took me over an hour.
30 seconds? Damn, I guess they finally got teleportation working. Replacing the card and waiting a week wasn’t what I was referring to, and I suspect you know that. But when you have a narrative to double down on...
I would usually agree, but near as I can tell the vast majority of the top-level comments in this thread have a negative score and I find that surprising.
When you start using Ethereum and web 3.0, the solution to data security becomes quite clear. I'm sure I'm gonna get roasted here for even mentioning this, but you'll all eventually come around.
Getting constantly hacked from all angles. FB, AMZN, GOOG, AAPL are always listening to us through devices. Mega corps build systems that leave sensitive data exposed to the public web. I'm curious to see what actually caused this hack. I wonder if they used MongoDB with default settings, lol.
I think the vast majority of these hacks come from some random office worker clicking on a doc in an email and opening it up in a Microsoft app, on Microsoft Windows. We've never really closed that hole. I don't see it coming from someone hacking the backend of Apple, Google or Amazon. Who knows about Facebook.
A breach like this should mean that Marriott should immediately be in bankruptcy, since the potential damage to any individual customer is well above multiple thousands of dollars.
I wish some of these giant companies would see some real consequences for their lax security practices.
I find this to be justification for carrying a fake ID and issuing a credit card from my corporate line of credit in that cover name for use when renting hotel rooms.
Every time a private organization demands your government ID to do business, assume that this will happen eventually. Airlines and hotels immediately come to mind, but I am sure there are lots of others. I'm not sure that air travel with a fake ID is viable due to Secure Flight, however. Also, as rental car insurance interfaces (potentially) with a police report/government ID, I am not sure I will rent cars in the future.
When it comes to places you regularly or habitually sleep, this could mean direct physical danger to you, depending on circumstance.
That people looking to extort, blackmail, or kidnap you or your family can now tail you from the hotels you usually/habitually frequent during conferences you consistently attend. It lets them predict your future location so that physical surveillance or ambush can be prepared.
I doubt that's going to be the result. I doubt that anyone is in any way that important. It's going to be financial / identity fraud, there's no reason for the thief to ever see your face.
Several thousand people are so important that someone might want to physically stalk them, yet didn't know their home address / travel habits until this particular leak? I doubt it.
It is relatively simple to maintain a “home address” in a different place from where you and your children sleep. It is not the case when you stay in a hotel.
"Let's immediately set up a separate domain name that looks like ours" remains one of the weirdest antipatterns in incident response.
[1] http://news.marriott.com/2018/11/marriott-announces-starwood...
[2] https://info.starwoodhotels.com/