Equifax security freeze PINs are the timestamp of when you request the freeze (twitter.com/webster)
471 points by moonka on Sept 9, 2017 | 187 comments



True thing: until recently, you could remove hard inquiries from your credit report merely by pulling your own credit so often in one month (using an array of daily monitoring services) that you would overflow the field and bump off legit inquiries.

I did this in 2009-10. It had been going on for a while before that and kept working for a while after, but sadly I hear they've since solved it, seemingly with a nightly batch job that removes your own credit pulls.

These companies are just barely functional for their purpose.

Experian seemed to have their act together a bit more.


I recently turned down a job offer at Experian; it (at least their San Diego office) was a shitshow.


I used to work for Experian. When I left, they paid me an extra month's wages before their payroll caught up to the fact that I was gone, then had to ask for it back. (I paid them back, FWIW.) If you can't run your own payroll, why should you be trusted with a credit bureau?


You should see Denver. It was like The Shining.


You seem to be surprised. Ever work at a bank or gargantuan investment company? Same deal.

"Look at how incompetent they are" is more like "Look at how our industry is." They're a representative sample.


Ah, bumping.

The kids on flyertalk were all over this.


How many requests, specifically, did it take to overflow the field?


I'm guessing one of 128, 256, 32768 or 65536


75-100 in a month iirc


This is embarrassing at this point; a credit authority printing dividends is too busy placating shareholders to even pretend to give a shit about the data of the people who _involuntarily_ have their PII stored on their platform.

Whoever files a class action should move for anyone to be able to purge their PII from a credit authority that has suffered a public hack exposing that PII, or for some other sort of incentive for these too-big-to-improve companies to do their job.


You're surprised that large companies are incompetent?

My experience has been that the main product of most companies is management politics. Actually shipping product is nearly irrelevant to everyone's daily activities. In some cases, people get punished for being competent.

One company I worked with made it clear they had no interest in listening to competent people. People were promoted for their ability to suck up to management. They got promoted when the projects they managed were delayed, buggy, and generally non-functional. Any competent engineer was summarily drummed out of the company for causing trouble.


Suddenly, I'm feeling like it's time to re-read Catch-22.


By Joseph Heller in 1961? https://amzn.com/dp/B0048WQDIE/ e-book $11.99


I worked here


And in the absence of legislative action, the only thing we can do in the meantime is go after Equifax's data sources and customers. I know that Citibank uses Equifax for providing FICO scores to their cardholders. Voicing your concern to banks like Citi, and threatening to close your accounts if their relationship with Equifax isn't terminated, can be effective if a big enough percentage of Citi's customers complain. As an interesting aside, Mint announced on the 6th of September that they were updating their FICO score service to use TransUnion. They had previously used Equifax. That's either an incredible coincidence, or they knew about the breach before anyone else and switched providers.


What legislative action could be done? Require companies whose systems have a large impact on people's lives hire licensed, certified software engineers? There is no such thing. Require them to follow industry standard practices? There is no such thing. Create new regulations governing the manner in which business management addresses concerns raised by developers? There is no such regulatory body.

You can't claim negligence in following industry standard practices when there ARE no industry standard practices. The closest we have in the software field is the work done by NASA on creating legitimately safe code. But companies don't want to follow those sorts of guidelines because they make software development slow and expensive. Sure, software development is the primary driver of their business's existence no matter what industry they are in, but they feel entitled to it being cheap and fast.


Except that there is precedent for these types of standards and laws in other industries:

PCI [0] is an industry standard which is mandated in order to maintain good standing in the payments industry and HIPAA [1] is US legislation which governs the handling of patient health data.

The issue we face is that there is no equivalent for either of these in relation to handling of PII and identity data.

[0] https://www.pcisecuritystandards.org/

[1] https://www.hhs.gov/hipaa/index.html


>What legislative action could be done? Require companies whose systems have a large impact on people's lives hire licensed, certified software engineers? There is no such thing. Require them to follow industry standard practices? There is no such thing. Create new regulations governing the manner in which business management addresses concerns raised by developers? There is no such regulatory body.

Why are there standards for cars, but no standards for computer systems? I think it's possible to create them. If there are standards, it's easy to define malpractice.


I agree entirely. It's just a matter of there not being any standards yet. I would hope that eventually we can all agree that even though such standards will never be perfect, and their establishment will be contentious, we need to do it for the overall benefit of society.

It will mean licensed software engineers are more expensive to companies; companies will be required to respect those engineers and treat them like competent experts rather than functionaries, and even give the engineers the ability to grind the business to a halt if they point out fundamental engineering problems with the system. Companies will hate that. Some of the practices made standard will seem boring, over-cautious, etc. to some engineers, and some will not be able to pass whatever tests are put in place; engineers will hate that. The licensing itself will probably end up being a way for some functionary body to enrich itself while providing dubious value, as is the case with many of the existing engineering licensing bodies.

But, despite all that, the overall social benefits would outweigh the negatives. And failure to accept those negatives will leave us in an even worse position.


Here are examples of things that could be included in such standards (see the sketch after the list):

* Passwords should be stored only in salted and hashed form

* Code injection attacks shouldn't be possible

* Personal data should be stored in anonymized form with mapping between real and virtual id stored separately

* Only cryptographic algorithms from an approved list may be used (no MD5)
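
For instance, a minimal sketch of the first item in Python, using only the standard library (the iteration count and salt size are illustrative, not a vetted policy):

    import hashlib, hmac, os

    def hash_password(password: str):
        # Random per-user salt, stored alongside the hash.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(candidate, digest)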


What could be done? Pass a law that you can not store any data about a person without their explicit consent for each type of data. No blanket opt-in, no shady changes to TOS.


Or Equifax became unresponsive while they investigated but didn't reveal what was happening. Related, but not via inside info about the hack.


>purge their PII from a credit authority

I can't see that happening if they do any kind of offsite backup and archiving. They will purge you from the current master, say they purged you, and you'll be none the wiser.


The best solution for this would be to enact data protection laws similar to the ones that take effect in 2018 in the EU.

A breach of this law would cost a company 2-4% of its revenue as a fine. Seeing how these big companies operate, there would be a lot of breaches.


>The best solution for this would be to enact data protection laws similar to the ones that take effect in 2018 in the EU.

I like GDPR, but a lot of people claim it's too draconian. We'll see how it works out in the EU.


I wish someone would go Mr Robot and corrupt their offsite tape backups with the HVAC system.


This is a unix system! I know this!


They could encrypt each person's data with a unique key.

Then, purging a person's data would come down to deleting that key from the system and from all backups of the keys.

That makes it a bit easier; the set of all keys will typically be a few orders of magnitude smaller than the data, and could be backed up using separate systems. Those systems wouldn't have to be updated often and access could be better controlled.

You would still need procedures checking nobody writes out non-encrypted data (including database keys), but that's doable; a first level scan would just run strings on your raw disks.

The disadvantage is that this would affect performance, especially for reporting services (a query gathering statistics over your customers would have to fetch all your customers' decryption keys).

A step up would be to hand out not bare decryption keys, but pairs (decryption key, expiration time stamp) encrypted with a private key that only your database knows the matching public key of. That allows your database to detect when your applications reuse decryption keys for too long. Depending on application architecture, that pair could even be a triple (decryption key, session key, expiration time stamp), and 'encryption' of course should use a salt.
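
A minimal sketch of the per-person-key idea (sometimes called crypto-shredding), assuming Python's cryptography package; the dicts just stand in for the real key store and bulk store:

    from cryptography.fernet import Fernet

    keys = {}      # small, separately backed-up key store
    records = {}   # bulk encrypted data; its backups never need purging

    def store(person_id: str, data: bytes) -> None:
        key = Fernet.generate_key()              # one key per person
        keys[person_id] = key
        records[person_id] = Fernet(key).encrypt(data)

    def read(person_id: str) -> bytes:
        return Fernet(keys[person_id]).decrypt(records[person_id])

    def purge(person_id: str) -> None:
        # Deleting the key (and its few backups) renders every copy of
        # the ciphertext, even on old tapes, permanently unreadable.
        del keys[person_id]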


> too-big-to-improve

Love that!


> _involuntarily_

This is probably part of the explanation.


It's time to have mandatory certification for people who develop critical systems. With such certification in place, you could treat an implementation like this as malpractice and sue over it (with the penalty paid by the insurance company that sold the malpractice insurance).

Doctors, lawyers, and many other professions have such a system; why can't we have it as well?


"Critical systems" pretty vague, and could be used to describe any system that processes payments or other basic things we use.

It's fundamentally different from malpractice in my opinion. In health care malpractice has obvious pieces of data - we know who the doctor is, we know their credentials, we know what information they had and when they had it, we know what they decided, what they prescribed, what they said.

Software engineering is a team based endeavor. Who exactly is responsible for unrecognized vulnerabilities? Everyone? No one? One dude who everyone sorta thought handled security stuff? It's as clear as mud.


>Who exactly is responsible for unrecognized vulnerabilities? Everyone? No one? One dude who everyone sorta thought handled security stuff? It's as clear as mud.

Security team with people who do it full time. Betting your security on the one dude who sorta did everything should be criminal.

Aka, not this: http://i.imgur.com/a7S95nG.jpg


>Who exactly is responsible for unrecognized vulnerabilities? Everyone? No one?

Here's a quote from Equifax's early release on the breach [0]:

Equifax said that it had hired a cybersecurity firm to conduct a review to determine the scale of the invasion.

So, to your question, I'm going with "no one", at least internally.

It's beyond belief (well, not really anymore): not only do they not have security covered internally (criminal in itself), they don't even appear to have a regularly engaged cybersecurity firm. They had to go out and hire one post facto.

[0] https://investor.equifax.com/news-and-events/news/2017/09-07...


What does professional even mean (from her past)? To me it means useless middle management that accomplishes nothing apart from moving numbers around to make them look good.


I always thought "professional" as the sole job description (i.e. not "professional X") was used as a euphemism for "prostitute", so I'm wondering why someone would put it on their resume like that. Did I just learn the word in the wrong context?


What if management doesn't hire a security team? What if management hires an incompetent security team?


>What if management doesn't hire a security team?

That's clearly negligence.

>What if management hires an incompetent security team?

That's harder to do because you have to establish competence, which has led to a bunch of hazing rituals via whiteboard for general software development and a lot of other insecurities. Being a security professional isn't regulated by law, so you can't check the law to determine if someone's competent. So whose opinion do you trust, and why do you trust their competence? An expert witness, maybe?


>What if management doesn't hire a security team?

"That's clearly negligence."

Great, so you just made it illegal or impossible to create a startup. Congratulations.


>Great, so you just made it illegal or impossible to create a startup. Congratulations.

All of this is in the context of handling a lot of PII or sensitive information, in which case, yes, I don't want just any startup working with PII without some kind of security team.


If you're handling sufficiently private data, then there shouldn't be a low barrier to entry. Starting a medical startup without the requisite expertise would be negligent; I don't see why certain classes of private information should be different.


Real engineers have a system in place for this. It's called "Professional Engineer" and it's managed by NCEES. There is no possible reason that practice cannot directly apply to software engineering, except for the cultural refusal of software engineers to take responsibility for anything.


In fact, there has been a Software Engineering PE exam since 2013. It's not surprising that you don't hear a lot about it because most of the topics on the test would make the average CS student groan (requirements, maintenance, software development lifecycle, etc)

https://ncees.org/ncees-introduces-pe-exam-for-software-engi...

Exam specs: https://ncees.org/wp-content/uploads/2015/07/SWE-Apr-2013.pd...


I think one of the problems is that it's just not societally necessary for 95% of software. If a game is shitty or an order entry system crashes occasionally, nobody dies. Nobody really even cares. Normal social and market mechanisms mean most software at least approaches adequacy.

In at least some of the areas where we really care about software quality (e.g., banking, medical devices) there are existing regulators who will fuck your shit up if you don't take certain aspects of quality seriously. Which is good, but I think it's part of why we don't have an industry-wide program.

Maybe we should take a lesson from Hammurabi:

"If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. If it kill the son of the owner the son of that builder shall be put to death." [1]

The occasional execution would probably make people much more serious about unit testing.

[1] http://mcadams.posc.mu.edu/txt/ah/Assyria/Hammurabi.html#Ham...


That would be acceptable if we were talking about buildings, a blue-collar job. But if you try to apply it to a white-collar executive, you're going to run into social resistance of a great magnitude. White-collar crime is a social norm and only very rarely even lightly punished. It is, to a degree, expected. White-collar crime kills more people and does much more economic damage every year than street crime, but our society has established as a norm treating street crime harshly while turning a blind eye to white-collar crime. If the builder's company gave the builder substandard materials and refused to supply the tools or time needed, few would get behind the idea of executing the executive who got his shareholders a 0.1% bump in profitability that quarter through those cuts, no matter who it killed.

Just look at Toyota's "unintended acceleration" case. If their firmware engineers had had access to static analysis tools (a few grand for a license), the bug would have been pointed out to them immediately. Instead, Toyota hired inexperienced engineers, deprived them of appropriate tooling, and pushed the cars out to the marketplace, where they killed people. The result? Toyota was cleared of any wrongdoing. They're computers. They're too complicated. No one can know how they work.


Yeah, far too much executive crime gets to a "who could have known" resolution when they certainly should have known. Or a "few bad apples" resolution when the system the executives designed created and rewarded bad apples.

I would love to see that change. Right now, though, we're in a big wave of "inequality is great", which I think strongly contributes to this problem. Let's hope that wave crashes, letting us start to hold executives and managers accountable.


I agree in part, but I think there are a few things about this scenario that highlight the problems with software. First is its extreme mutability: you can endlessly patch it, and often have to when vulnerabilities or flaws are discovered. Unfortunately this tends to lower the bar for a first release. Second, if you want to be cost-effective you must leverage many existing components of mostly unknown provenance and quality. Finally the security aspect is extremely difficult because both the cost and risk of mounting an attack are extremely low.


> First is its extreme mutability: you can endlessly patch it, and often have to when vulnerabilities or flaws are discovered.

Sometimes, instead of patching, the software should be decommissioned. Search the news for planes that were grounded when serious flaws were found.

> Second, if you want to be cost-effective you must leverage many existing components of mostly unknown provenance and quality.

There are different components for different kinds of requirements. You wouldn't use components rated for two-story buildings to build a skyscraper.

> Finally the security aspect is extremely difficult because both the cost and risk of mounting an attack are extremely low.

If the risks are high, systems shouldn't be deployed. There's a reason we don't allow people to have machine guns for self defense.


This concept is really not at all portable to software, especially security. It's a tempting analogy, but an invalid one.


No, it's not even an analogy. The precise methods and regulations are almost directly transferable. People are doing it. It works.

It just needs to be industry-wide.


No, it doesn't work at all.


While I agree, how do you apply software engineering practices in a field where a good chunk of the workforce doesn't have formal computer science education?


The same way real engineers work: classroom training in formal engineering, followed by years of experience under an accredited engineer in the field. There is testing at each transition to weed out the skaters.

Software engineers don't need to be computer scientists, in the same way civil engineers don't need to be materials scientists.

There is a bootstrap process, and even in other industries not all engineers are PEs... but all projects are reviewed and stamped by PEs.


Even if the whole workforce had formal computer science degrees, most of us still wouldn't have formal engineering education. The CS programs turn out computer scientists, not professional engineers.


And even if they did have formal engineering education and formal CS degrees and formal whatever, most (all?) would still be incapable of writing/designing/implementing bulletproof code.


We have literally centuries of history in engineering in the physical sciences to use as an example.

This industry resists because it's filled with CS folks who either can't or won't believe that there is anything more to engineering than data structures and algorithms trivia.


The management who told the developers "we need this done by tomorrow, figure something out or it won't be good for you"


>The management who told the developers "we need this done by tomorrow, figure something out or it won't be good for you"

Imagine that management tells their lawyers: we need this done by tomorrow, figure something out. Most likely, the lawyers will either refuse to do the work or report management to law enforcement.


truth!


Structural engineers have to deal with these sorts of issues. They do not build a bridge and say "this bridge is safe." They build it and say "this bridge will function within X, Y, and Z parameters for A number of years if maintained in this way" and similar things. They're dealing with a system which is known to not be totally invulnerable. They do it through comprehensive testing, scientific methods, and, above all, through trusting those technical concerns to the total exclusion of business goals. If it is 90% cheaper to use a weaker concrete, they do not substitute it in and cross their fingers. And if the CEO goes behind their back and does the substitution, or he refuses to provide them with the expensive physical simulation software necessary to do their job, or he ignores safety concerns raised by his engineers, that CEO goes to prison and the company is usually destroyed. This is starkly different from technology companies where suggesting such practices is basically asking them to completely restructure their entire organization fundamentally.


How does this work in civil engineering, or construction in general? It is also a team-based endeavor. The way I understand it, only the engineers or management need to go through certification. Basically, the people who direct the project.


We certainly can have such a thing, and exactly that has been discussed for well over a decade (probably much longer, but I'm only so old) in the ACM, among other organizations. It's a difficult issue. Creating a set of standards and a certification process has a lot of pitfalls. Failing to create such a process has a lot of pitfalls. Knowingly choosing to step into one set of pitfalls over another is never a comfortable choice and people are generally very bad at it until something really, really bad happens that gives a large number of people enough irrational fear of one option to push them toward the other (and they will rabidly and aggressively oppose any discussion of acknowledging or compensating for the pitfalls they're moving toward in that case - they feel entitled to a 'clean' option and they will demand you pretend the option they're going for is that).

Rather than compare it to doctors, lawyers, etc, I would compare it to structural and civil engineers. Those are the sorts of regulations we require. If a CEO of a construction company ignores warnings given by one of his structural engineers while building a bridge, that CEO is held responsible for criminal negligence and he is put in a prison for a long time. The same needs to happen for technology company management who cut the development timeline, deprive developers of adequate tools and work environment, and who hire inexperienced development staff simply because they're cheap.

Would you like to drive across a bridge if you knew the company operated the way tech companies operate? Viewing their engineers as a cost center to be reduced, as little more than spoiled typists whose technical concerns are always viewed as unimportant in the face of business goals, crammed into office spaces proven by over 1000 studies to damage productivity, and constantly pressured to rush through everything in defiance of the basic biological fact that human beings are not capable of extended periods of mental exertion, especially in the face of constant interruption? Would it make you more or less confident in that bridge if there were court precedent for letting companies whose practices resulted in people's deaths off without punishment? That's the situation we're in.


The sheer scale is incomparable. A doctor's mistake during surgery doesn't quite compare to losing the data of 147 million people.

"Critical systems" developers would need astronomically expensive insurance to even exist, and therefore prohibitively high salaries.

I personally believe there should be some measure of a corporate death penalty to emphasize the responsibility involved though.


>The sheer scale is incomparable. A doctor's mistake during surgery doesn't quite compare to losing the data of 147 million people.

Then, companies shouldn't have such a high concentration of risks in one place.

The problem wouldn't be such a disaster if just SSN wasn't enough to get a loan. For example, if we had a password in addition to SSN (stored in a hashed form), the problem would be much less severe.


If the risk of such a system failing is so large, this seems like a point against your argument. We MUST be able to reduce the risk of such systems failing, and do so provably. If we cannot reduce the cost, we must reduce the risk, such that the calculation makes the system affordable again.


Developers don't control budgets and deadlines at large companies, management does. So what does this "certified" individual do when he's given a project without resources allocated for proper security auditing? Does he intentionally get fired for refusing the assignment? That works if he has bountiful savings, no mortgage, no kids. Surely no unethical contracting company will pick up the job after he leaves...


If only there were more software jobs out there, then they wouldn't be hemmed in so intractably.


Finding another software job is insufficient. You need to find one that gives its employees the time and space they say they need, regardless of other competitive or financial pressures. Not so easy.


Nice red herring


> Developers don't control budgets and deadlines at large companies, management does.

True, but that's why good organizations almost always have technical people on the management team who advocate for the technical arm of the company and ensure that it is appropriately resourced.


Seems very beside the point. Equifax obviously isn't a good company.


How does it work with lawyers or engineers?


Engineers say "no, we can't build it that way" and people respect it because they know that engineer: knows their work and has recourse through their professional society as well as government oversight agencies if undue management pressure compromises safety. I don't see it playing out the same way for most software teams.


As licensed professionals, they work under the knowledge that there are practices and outcomes that can cause them to lose their licenses.


Once again, maybe it is time for such a thing to exist for software engineers who aren't web devs.


It's been suggested, but AFAIK some find that fundamental aspects of software development make it hard, if not impossible, for it to ever be True Engineering. I'm pulling this out of my back pocket on a Saturday evening, so Google for more, but there are arguments on both sides going back some years. Heck, google "is software development engineering" and you'll find long Quora threads on the topic.


"who aren't web devs"? Why throw that in? This breach occurred through a hacked website, as do so many others.


They will simply refuse to do the work. It's better to lose a job than to lose a license.


That doesn't really seem comparable.


Why not? The proposal was to license software engineers (at least for "critical" systems) the same way as those.


I'm not a lawyer, but it doesn't seem to me like they work the way software developers do, with management breathing down their necks to do things in less time and telling them to cut this or that corner. If anything, it seems more like the opposite, with the lawyer directing a lot of his staff to do legwork.

If anything, I'd think accounting is a better fit. But even then, I can't say I've met many people in the software industry who think very highly of any existing test or certification; why would this one do so much better at measuring actual skill?


something something software developers need a union something


The best and most legendary developers still make mistakes that cause this kind of thing. And certified or not, all developers run into deadlines that result in production of imperfect code (and that word is key -- code has to be perfect ... always ... if that's your only defense).

While writing secure code using best practices is a big part of the security equation, any company that stops there will be pwned. The most oft-used illustration in securing systems is "the onion". You need to layer on protections, from firewalls (both traditional and application layer), to solid access control to simply making sure that systems are configured properly (just hit up any SSL testing site and pop in just about every non-tech business' main web page - might as well just make the result page static with a big letter "F"). Heck, even technologies like ASLR/DEP are an extra layer.

The goal is to make an attacker have more than a few hurdles to hop[0] in order to breach and to ensure that if you are breached, the value of what is exfiltrated is either worthless (i.e. properly hashed passwords) or detected and stopped before it's all gone (partial breaches aren't awesome, but it's easier to explain 1% of your data being leaked than it is to explain all of it being leaked).

[0] I've always liked that sarcastic advice: "If you and your friend encounter a bear ... run ... you needn't outrun the bear, you only need to outrun your friend". If you make things difficult enough, your attacker might move on to another target. And hopefully, if they succeed in breaching part of the defenses, someone will discover it before they return and shore up what wasn't "perfect".


That's not what's needed. I guarantee you if the companies responsible for these leaks and their executives faced real penalties with teeth things would get better, licensure scheme or no.


Like PCI compliance?

Ask people who've gone through that process how rigorous it is...


It's rigorous, but in all the wrong ways.

At $DAY_JOB our security falls into two buckets (1) PCI and (2) stuff that keeps us secure.

IDK if it's possible to have a widely accepted security standard that isn't checking nonsensical and out-of-date boxes.


Well, it could be worse:

1) Stuff that keeps you insecure (a.k.a. ISO 27001 ISMS stuff)

2) Stuff that somewhat helps, but is covered by fluff (a.k.a. PCI-DSS)

3) Stuff that actually keeps you secure.

PCI-DSS at least gives you a sledgehammer to convince lazy low-level managers to dump ciphers like RC4 and encrypt some of their data. It's not utopia, and it does err too much on the side of perpetuating banks' infatuation with 3DES, but I've seen good stuff get done by using it as an excuse.

You still need to have a competent security team of course, but it helps them not being ignored. Well, sometimes.


PCI is why we have weak passwords that we have to reinvent every 90 days.


I'm not a big fan of most tech certification efforts. They mainly document attendance at classes and/or ability to regurgitate trivia. They also tend to reward formality instead of quality, and slow progress. Consider, for example, if we'd had this sort of certification 10 years ago. It'd likely be filled with waterfall-style process requirements. Those don't actually increase safety; they just look impressive.

I'd be happier to see licensing and accountability. But it would have to have significant teeth. E.g., companies can't build systems of type X without somebody licensed. If there are problems with the system, then the person with the license faces personal fines and risk of suspension or loss of license. That would be less bad than certification, but it could still substantially slow industry progress if the licensing review board had a conservative tilt to it.

Of course, the real problem with most places is not engineers not knowing. It's with managers who push for things to happen despite what engineers advise. Licensing could sort of fix that, in that it could force engineers to act like professionals and refuse to do negligent work. But it still lets shitty managers off the hook.

So what I'd really like to see is a regulatory apparatus for PII. In the same way that the EPA comes after you for a toxics spill, an agency would come after you for a data spill. They investigate, they report, they impose massive fines when they think it's warranted. And when they do ream a company for negligent management of data, executives at all the peer companies get scared and listen to the engineers for a while.


I may agree, but Equifax is by no reasonable definition "a critical system".


There are levels of "criticality". Credit monitoring is in a different category from, say, spacecraft or nuclear power plants. But if a security breach can lead easily to identity theft, and therefore to a catastrophic compromise of one's financial stability, that should require some higher level of mandatory certification.


I don't know what definition of "critical" we're going with, but the fact that they have the SSN of basically every American makes them an important weak point.


It is really time to move past SSNs


Tell that to people who are denied loans thanks to credit reporting agencies.


Maybe it is, but that's also just a way of trying to push the costs onto the low-level employees. The attitudinal problem here is with the senior management and shareholders, who are the actual responsible parties. If they really cared, they'd have established such a certification or announced an intention to adhere to some existing standard.



And the hits just keep on coming...

www.equifaxsecurity2017.com uses an invalid security certificate.

The certificate is not trusted because the issuer certificate is unknown. The server might not be sending the appropriate intermediate certificates. An additional root certificate may need to be imported.

Error code: SEC_ERROR_UNKNOWN_ISSUER


This is such an awful domain to use in the first place. It's conditioning users in exactly the wrong way, aside from there being a security warning for some users.

How do you explain to your father/grandfather/whoever that equifaxsecurity2017.com is ok, but equifax-security-breach.com, checkyourequifaxaccount.com, equifaxsecurity-2017.com and equifaxsecurity2018.com are not legit?

Stick to your top level domain. Something like security2017.equifax.com or equifax.com/security2017 would be okay.


This is how every class action lawsuit does its website too, and it blows my mind! It looks so sketchy.


Bear in mind that in general, those class action settlement websites are run by the lawyers behind the class action, not by the company that was sued and has agreed to settle.

It is in the best interests of the settling company to keep distance between those sites and their primary domain.


The cert is signed by GeoTrust and works perfectly fine on my Chrome on Windows 10.

[Edit: Ah, the chain is incomplete, see https://www.ssllabs.com/ssltest/analyze.html?d=www.equifaxse...]



I was surprised they didn't even use a link on their main equifax domain, but instead set up a new one that could just as well have been any phisher's.


Something worth taking into consideration: these companies are not Engineering/Tech companies at the core. They were probably born as paper companies and digitized their operations later on. I am hoping for the day something newer and more appropriate for this age makes them irrelevant.


> I am hoping for the day something newer and more appropriate for this age makes them irrelevant.

We have the technology to build vastly superior replacements right now. It's mostly network effect requirements that make this extremely challenging/slow to implement.

An example of something we could do is cryptographically authenticated web-of-trust creditworthiness estimation, with techniques like proof of burn and selective trust anchoring used to establish terminal nodes in the unrolled trust DAG. This sort of thing would allow for pseudonymous, automated determination of trust without the extreme security and privacy risks posed by centralized identity stores like the credit bureaus.
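
To make the web-of-trust part concrete, here is a toy sketch of the trust-propagation step only (proof of burn and anchoring omitted); the graph, weights, and per-hop decay are all made up for illustration:

    # Trust flows from anchor nodes through a DAG of vouches,
    # attenuating at each hop. Purely illustrative numbers.
    vouches = {                     # voucher -> {vouchee: weight in [0, 1]}
        "anchor_bank": {"alice": 0.9},
        "alice": {"bob": 0.8},
        "bob": {"carol": 0.5},
    }
    anchors = {"anchor_bank": 1.0}  # terminal nodes trusted directly
    DECAY = 0.9                     # per-hop attenuation

    def trust(node: str, depth: int = 4) -> float:
        if node in anchors:
            return anchors[node]
        if depth == 0:
            return 0.0
        # Best trust reachable through any voucher of this node.
        return max(
            (DECAY * trust(voucher, depth - 1) * weights[node]
             for voucher, weights in vouches.items() if node in weights),
            default=0.0,
        )

    print(trust("carol"))  # ~0.26 with the numbers above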


I find technical discussions around this subject extremely fascinating but I'm a total noob in this space. Would you mind sharing any relevant links on the topics you mentioned?


> are not Engineering/Tech companies

Which means each and every line of code was written by the lowest bidder.


Or CEO/CTO's buddy's company


The CSO apparently graduated with a music major. Not that it should disqualify them; many in tech didn't graduate with a CS degree. But in light of the incident, one has to wonder.


I don't think that is relevant at all. There are plenty of incompetent CS graduates, and plenty of highly competent non-CS graduates.


In general it would not be; now that the personal data of 100+ million people has been stolen from these clowns, it seems relevant.

Leadership sets the priorities, and expectations. They get paid disproportionately more than other employees and I think they should be scrutinized and bear responsibility correspondingly.

But I have no doubt they probably found someone lower in the ranks as a scapegoat.

"Joe was in charge of patches. And we are all equally disturbed and horrified by his behavior. But we've reached out to him and let him go. Now give us more of your personal information so you can get free credit monitoring for 6 months [+]. -Sincerely and with deeper regrets, the Executive Team [++]"

[+] (fine print) then charged at $49.99 a month until cancelled. To cancel, please visit one of the 3 Equifax locations in person on the first Wednesday of the month. Accepting

[++] (even finer print) by accepting the free credit monitoring you agree to binding arbitration and forfeit your rights to participate in a class action suit against Equifax and its subsidiaries.


I'm not saying we shouldn't have serious questions about his competence after this breach. Rather, my point is that we should be questioning his competence (and that of the rest of the executive team's) due to this breach, not his credentials.

If he had a CS degree, that wouldn't make him any less responsible for this massive data leak.


Usually, the larger the company, the deeper its processes go, shielding it from individual incompetence (so the company can hire for easy-to-measure attributes, like compensation, and protect itself from difficult metrics, like technical competence). Unfortunately, processes also prevent individual competence from having a noticeable impact on the company.

If I got the story right, this bug was present for the last 9 years and patched upstream a couple of days before the leak. Some measures could have prevented its exploitation or reduced its impact, like throttling by IP, one-time session keys and so on - and should be in place for any serious application - but it's entirely possible they had a fixed schedule for patches and mis-evaluated this flaw as non-critical.

A LOT of companies carry obsolete dependencies for a long time.
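
For reference, the kind of per-IP throttling mentioned above is only a few lines (a toy in-memory token bucket; a real deployment would use shared state like Redis, and the limits here are invented):

    import time
    from collections import defaultdict

    RATE = 5 / 60.0   # refill: 5 attempts per minute per IP
    BURST = 5         # bucket capacity

    buckets = defaultdict(lambda: (BURST, time.monotonic()))

    def allow(ip: str) -> bool:
        tokens, last = buckets[ip]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)
        allowed = tokens >= 1
        buckets[ip] = (tokens - 1 if allowed else tokens, now)
        return allowed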


That's where they learned to play the tiniest violin in the world. I sincerely hope that person has at least the dignity to resign.


That's very common at all non-tech institutions: the top technology management positions are held by people who don't have a technical background and may know very little.

Reasons are:

• It's hard to find technically skilled people who want to spend all day doing management tasks.

• It's easy to find essentially unskilled people who do want to spend all day doing management tasks.

• There is a large set of unwritten rules and social expectations that the people who created and run such companies use as proxies for competence. Do you dress nice, can you play an enjoyable game of golf, are you married, how old are you, etc. These proxies invariably de-select the kinds of people who have a deep understanding of their field (i.e. single young men who are able to devote enormous hours to their craft).

Edit: note the recommendations in her LinkedIn page. Every single one talks about her collaboration and communication skills, not a single mention anywhere of technical skills. It's tempting to shoot "Susan M" here but the real issue is a boardroom culture in which management is seen as a skill entirely divorced from the effort being managed.


So sad, that is so true these days that it shouldn't even be funny.

A meritocracy is not born when rich corporations (buyers of labor) select vendors (sellers of labor) based on personal connections rather than ability to do the job.


I'm reasonably sure that "tech" companies try to save costs just as much as "traditional" companies. The difference might only be that they know how to formulate their requirements better.


I am 100% certain that "tech" companies have an entirely different attitude towards engineering costs than non-tech companies. The former want the best people working on problems and will pay what it takes (within reason), the latter want problems solved for the lowest price.

This is extraordinarily evident in the distribution of engineer salaries.


That's a massive reach! Have you been spending too long inside the bubble? Define a "tech" company.


Which part is a massive reach? The fact that certain companies (which I choose to classify as "tech" companies) are willing to invest 2-3x as much into their technical employees?

A "tech" company is a company for whom technology (ie. developers) is a profit center rather than a cost center.


How does Equifax, a private company, have the rights to access my personal data in the first place? Who exactly is giving it to them without my explicit consent, and why?


You do give your consent, every time you deal with a financial or credit-issuing institution.


Of course, effectively, you don't have a choice - you need those financial institutions to live a normal life. Equifax (and other bureaus) are coercive monopolies.

I hope that this story will bring down the hammer on their heads - not just Equifax, all of them.


Every financial institution you deal with gives them your info, and they do it so collectively they all have lower risk on loans. I suspect if this wasn't in place, we'd be paying significantly higher interest rates on loans.


You don't own your own data, unfortunately. At least not in the US.


I mean...you consent when you hand your details over to a financial institution. How credit reporting agencies end up with your information is very straight-forward. Save the alarmist rhetoric for the headlines.


> “There’s nothing in any statute or anything else that allows you to ask Equifax to remove your data or have all your data disappear if you say you no longer trust it,” said John Ulzheimer, a consumer credit expert

https://www.nytimes.com/2017/09/08/technology/seriously-equi...

That sounds to me like I don't own my own data.


The companies on the other side of your transactions.


Even all-paper companies back in the day had ops.


This sounds like something blockchain could have an application in. But honestly, other countries do fine without credit bureaus; IMO they should just be abolished.


A blockchain is public. You want everybody's credit public?


The data isn't public if it's in encrypted form. When you apply for credit, you'd decrypt it with your own private key and re-encrypt it with the creditors private key and send it to the creditor, they would then decrypt privately, process the application, and discard the data.


> you'd decrypt it with your own private key and re-encrypt it with the creditors private key and send it to the creditor

Creditor's public key, not private key.
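
A sketch of that corrected flow, assuming the PyNaCl library (key distribution and the on-chain storage are hand-waved):

    from nacl.public import PrivateKey, SealedBox

    me = PrivateKey.generate()
    creditor = PrivateKey.generate()  # in practice you'd only hold their public key

    # My record sits in the public store encrypted to my own public key.
    stored = SealedBox(me.public_key).encrypt(b"credit history ...")

    # To apply for credit: decrypt with my private key ...
    plaintext = SealedBox(me).decrypt(stored)
    # ... and re-encrypt to the creditor's public key before sending.
    for_creditor = SealedBox(creditor.public_key).encrypt(plaintext)

    assert SealedBox(creditor).decrypt(for_creditor) == plaintext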


Yes, goofed on that one.


No, blockchains are magic technology that makes money appear out of nowhere and hides all activity from evildoers.


Well, ethereum doesn't exactly have that good of a security record.


I am curious what programmer would make such a choice vs. calling random or asking the user.

However, while undeniably stupid, hopefully they have rate limiting in place so that guessing the PIN would not be feasible even if you know the day the credit freeze was put in place.


Do you really think a company that has visibly demonstrated this much incompetence thus far would be making smart decisions where you can't see them?


Someone probably read an article about how RNGs aren't truly random, and so decided timestamp (which never repeats!) was the right alternative.


Of course, two people who happen to request at the same time will get the same key anyway, so it doesn't even solve that problem.


Right, of course not. They're idiots.


They don't want to migrate their db to add an extra field?


Or add a new table with this field.


Maybe there was no way to save and transfer the PIN from one part of the system to another, and the only way to do it was to derive it on the other side from the timestamp.

Pretty hacky implementation, but oh well.


Hashing with a secret key (a pepper) solves that. Heck, hashing with a salt, or even just plain hashing, would be miles better than this.
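
E.g., a sketch with Python's standard library; the pepper would live in an HSM or at least somewhere outside the database (names here are made up):

    import hashlib, hmac, os

    PEPPER = os.environ["PIN_PEPPER"].encode()  # secret kept out of the DB

    def derive_pin(timestamp: str) -> str:
        # Even a predictable input like a timestamp yields an
        # unguessable PIN without knowledge of the pepper.
        digest = hmac.new(PEPPER, timestamp.encode(), hashlib.sha256)
        return f"{int(digest.hexdigest(), 16) % 10**10:010d}"  # 10 digits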


Hash? I wouldn't trust the folks at Equifax to know the difference between a one-way function, a cannabis concentrate, and a fried potato dish.


Carrying around the password wasn't a problem.

There's an option to pick your own password during the previous step but having an automated one is the default option, so a lot of people miss it.


If you have a 1 in 60*24 chance of guessing correctly, then after 1000 guesses (potentially against different people) you have a 50% chance of being correct on one.


It's probably even better than 1 in 60*24, because a freeze is usually requested during business hours. So 1 in 60*8?


Could you explain your math here? Is there something you learn about the other digits when you make a wrong guess?


Your chance of being wrong in a guess is 1 - 1/(60*24).

Your chance of being wrong in all 1000 guesses, assuming you guess randomly (not ensuring you never make the same guess twice), is (1 - 1/(60*24))^1000.

That's about .5.

This is based on the assumption that you are guessing for a different person each time, so you can't increase your odds by eliminating any guesses you've already made. If you're guessing for the same person every time, you just need to guess half the possibilities. You can confirm this by realizing:

Your chance of being wrong in a guess is 1 - 1/(60*24 - x), where x is the number of guesses you have already made.

The product of 1 - 1/(60*24 - x) from x = 0 to x = 60*24/2 - 1 is .5.
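
The arithmetic, for anyone who wants to check it (Python):

    from math import prod

    N = 60 * 24  # 1440 possible hhmm values in a day

    # 1000 independent random guesses, each against a different person:
    print((1 - 1 / N) ** 1000)   # ~0.50 chance they all miss

    # Non-repeating guesses against one person: the product telescopes,
    # so covering half the keyspace gives exactly a 50% success rate.
    print(prod(1 - 1 / (N - x) for x in range(N // 2)))  # 0.5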


You could probably increase your odds significantly over time by analyzing the distribution of pins (and as others note, by making common sense observations you can also increase your odds)


60*24 = 1440 possibilities. If you guess half these numbers you would expect to crack half the pins.


I'd like if Equifax was just shut down and the assets redistributed to the affected parties. Shareholders have no incentive to hire ethical and competent managers if they don't have to bear the losses stemming from bad decisions.


Outsourcing/H1b costing a lot more than they save is my guess.

If you develop in-house software, you ARE A SOFTWARE COMPANY, whether you want to be or not.

Amazing how this good old boy network still thinks like it's 1970.


Serious question... if there were a 3-5 attempt lockout, would this be any less secure than a randomly generated number?


Yes, if a lot of people freeze their accounts on the same day and the attacker has a way to try the same PIN on a lot of accounts. (No, if the random number is only 2 digits long.)


Very much so. If you can guess when someone froze their credit to within a day, for example, then 5 tries gives you a 1 in 288 chance of getting in. A truly random PIN of the same 10 digits would be more like 1 in 2 billion, and a longer random token would be effectively unguessable.


But then you also have to guess their SSN and last name on top of guessing the correct day for it to be a 1 in 288 chance, no?


Or look it up in the breached info.


1) Target individual for identity theft.

2) Search their twitter history for "I just froze my credit" or similar.

3) Try the PIN corresponding to t-1, t-2... t-5.

4) Profit!


Not sure I'd count on them having a lockout, and even if they do, a shitload of the PINs will be easy to guess, since they'll be clustered around now, when many people are rushing to lock their credit.


It's like the Keystone Cops are running Equifax. They are now a complete joke. And it's sadly not funny.


Has anyone else confirmed this? I don't know who Tony is, and I usually like more sources than a tweet.


I worked with Tony a number of years ago and have been following him on social media. He does a lot of public records research, and for the last two-plus years he's been fighting the Hennepin County Sheriff's Office over public records requests about law enforcement's use of biometrics. As of April of this year, the Minnesota Court of Appeals ruled against Hennepin County. I'm not sure if the county is planning on appealing to the Minnesota Supreme Court, but based on their behavior so far, I'd bet on an appeal.

If you need more convincing, check out his blog at https://tonywebster.com/.

PS. Direct link to his post about the MN Court of Appeals outcome: https://tonywebster.com/2017/04/minnesota-court-of-appeals-d...


I can confirm at least the first 6 digits are MMDDYY, based on the date that I saved it into 1Password. The last 4 digits look very likely to be hhmm.

Edit: Confirmed: the PIN is the exact date/time stamp of when my freeze was applied. I can tell based on another note saved to 1Password. It is within 1 minute :(
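
If that's right, a whole day's keyspace is trivially enumerable (a sketch; the MMDDYYhhmm layout is inferred from reports in this thread, not confirmed by Equifax):

    from datetime import datetime, timedelta

    def candidate_pins(day: datetime):
        # All possible MMDDYYhhmm PINs for a given freeze date: 1440 values.
        for minute in range(24 * 60):
            yield (day + timedelta(minutes=minute)).strftime("%m%d%y%H%M")

    pins = list(candidate_pins(datetime(2017, 9, 8)))
    print(len(pins), pins[0], pins[-1])  # 1440 0908170000 0908172359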


Just try it out for yourself, like others did: https://news.ycombinator.com/item?id=15204573


I'm not using anything that Equifax set up in case it waives any of my rights. This is how bad it's gotten, people are afraid/untrusting of their security/protection measures.

Thank you for linking to more accounts of this.



I can confirm. The time is in EST. Now anyone who knows when you froze your credit also knows your exact PIN.


At this point, I will be legit shocked if there aren't actual lines forming around courthouses, full of plaintiff-side lawyers trying to get a piece of this unbelievably stupid and negligent company. Holy shit.


OK, what is "Equifax security freeze"?


A security freeze (which you can get from all three credit reporting agencies, typically for a small fee of $10 or $20) prevents them from giving your credit information to a bank (or car company, or college loan agency, or...) asking about you when you apply for a loan.

It also prevents them from selling your credit information to credit card companies (and, I'm sure, insurance agencies and many other businesses). These businesses want a list of people with good credit history to market their wares to.

Essentially, you're "taking yourself off their sales shelf", so they do not want you doing this, and will make it as hard as they legally can.

Also note that you pretty much cannot get a loan while your credit records at these firms are frozen, but it's actually easy to get them un-frozen and then frozen again once you get the loan.

In fact, you should find out which credit agency your bank (car dealer, whatever) uses and then only unfreeze that one.



Whoa... one guy said on Twitter this was the case in 2007!!


9:38 PM - 8 Sep 2017


OP was referring to the top reply to the linked tweet.

> Verified PIN format w/ several people who froze today. And I got my PIN in 2007—same exact format. Equifax has been doing this for A DECADE.

https://twitter.com/webster/status/906361966645710849


The only solution is to put the data in our hands only and we authorize access to it on an as needed basis. It should not be centralized anywhere.


How do you prevent User X from lying then?


Credit cards X, Y, and Z all know my current balance, place of residence, and payment history. They let me, and only me, know that info via some method that can be trusted by anyone (blockchain?).

At a later date, I authorize credit card W to access some computed score that I compute and verify via other trusted means (another blockchain or whatever). Credit card W never sees all my data, and the verifier gets access to only what it needs. Nobody needs to trust anyone; instead everyone trusts the chain. Just an idea.


What would be the best credit monitoring service then? Any recs of ones that have their act together?


What a joke.


There are a few things here that shouldn't surprise me[0], but do. A credit reporting agency's one product is personal data (basically, it's you). Leaking that data basically makes it worthless (or worth a lot less) and, besides affecting the people whose data was leaked[1], it damages the product of their competitors. You'd think that would be something protected with so many layers that a breach of their web property wouldn't make much of a difference[2].

At previous employers, without going into terribly much detail, we had an asset that was treated with the kind of security that something like this should have been treated with. It was on a segregated network that could only be accessed through proxy hosts, requiring two-factor authentication. The proxy hosts were hardened (only the specific, needed, services/components installed/running, audited and firewalled to death). The devices in the secure network could not see the corporate network, let alone the Internet and the corporate network/internet could not see these devices. Even special 'management interfaces' for corporate devices were segregated. This was in addition to all of the rigor put in to securing each endpoint.

Companies need to realize that security is purely a defense-related behavior. You have to be "perfect" 100% of the time, but your attacker need only be right once. The goal is to increase the number of times an attacker has to be right to get at your data: from ensuring your database accounts can only execute specific things[2], to hardening and isolating your web servers to limit exposure, to properly configured firewalls (including application-layer firewalls/log analysis), and ensuring that employee access to high-value targets is as minimal as possible and protected thoroughly. There are both "preventative" and "reductive" technologies that need to be put in place. Preventative is designed to stop a breach; reductive is designed to ensure that if breached, the breach is either worthless (i.e. proper password hashing) or caught and interrupted before all of the data is exfiltrated. It's a lot easier to explain to investors (and your fellow countrymen) that a couple of million user accounts were exposed than it is to explain that 124 million of them left.

From the looks of it, it appears Equifax treats security like most large, non-tech businesses -- an expense that should be cut as deeply as possible. It's probably fitting that they have the word "fax" in their name. If I had a guess, they probably have mandatory security auditing requirements, they paid the least they could to meet that regulation, and got the answer they paid for (or found someone to give them the answer). I'll also guess that this PIN issue will turn out not to be the worst of the security practices in place -- I mean, how many weeks did they wait to report this[3]?

[0] I have a few years' history at a large corporation working in and around security. I've seen the ugly, though I feel that we handled things very well (incredibly well compared against Equifax!)

[1] i.e. not their customers.

[2] I'm thinking in terms of a typical SQL server, where one can eliminate table/view level access in favor of stored procedures that limit what they provide and require a level of knowledge of the operation of the system (and can be tracked by logging in a manner that identifies behavior that's not normal).

[3] And is it just me being overly cynical or does anyone else think that they waited until a historic hurricane would dominate the news cycle before going public with it? It was pretty good timing, really -- coming right off of Harvey and right into Irma, it's easy to miss this story among the other big news (one 'general news/politics' site that I expected to see all kinds of headlines on had it quite low on the fold for a day and nowhere to be found, today). Or maybe they were just waiting to give time for more of their higher-ups to sell stock. /s


[flagged]



[flagged]


This isn't okay. Don't dox this guy.

Besides, lots of "hacker" hackers are self-taught.


Calling all hackers, brute force much?


Equifax is also the only one out of the big three that shows your SSN in plaintext while you type it in on that online request form. They're just lacking in all departments it seems.



