> If you find password protected zips in the release the password is probably either "Intel123" or "intel123". This was not set by me or my source, this is how it was acquired from Intel.
Can't say I'm surprised, people are lazy.
Another large tech company I used to work for commonly used an only-slightly more complex password. But it was never changed, so people who had left the team could still have access to things if they knew the password. It was more of an entry point into the system than the company's Red team ever was.
Password protection may have been used to bypass antivirus and other filters. While you should treat dumps like this with a lot of suspicion, treat password protected zips with a heaping dose of care as they may have been used to evade automated defenses.
Antivirus software is some crazy shit that may trigger on any random action, and it teaches people to follow the most unsafe procedures without questioning them, just so they can get anything done.
I've heard it put this way: If you force users to trade convenience for security, they will find a way to obtain convenience at the expense of security.
> If you force users to trade convenience for security
I _wish_ it was better security they were making the trade for. It often isn't though. These programs are large, expensive, and don't do much most of the time. I feel there's a perverse incentive for developers to make their AV products as noisy as is possible to justify their own existence.
And yet.. even with full AV rollouts locked down at the highest level, bad actors still get into networks and exploit them. So, to me it feels like our users are trading away their convenience for our misguided CYA policies.
The truth is, you don't need much in the way of AV software if you are willing to outright block certain types of files.
In most large corporations you are basically not allowed to send anything that could even potentially hide a virus except for maybe Office files (nobody yet built a compelling alternative to Powerpoint and Excel).
Typical rules already block all executable binaries, scripts and password protected archives (because they could hold binaries or scripts), etc. As a Java developer I have recently discovered my company started blocking *.java files.
A lot of this stuff (AV software) is getting deployed at all different layers of the environment. Firewalls are getting better at dynamic file analysis and file blocking, and the endpoints are loaded with user behavior/analytics, AV and DLP tools. AV is so omnipresent because it's in a decent number of the netsec appliances these companies stand up.
I could be mistaken on this, but wasn't this basically the sales pitch for Spotify? Basically saying "you'll never get rid of piracy, but you can compete with it".
This was the sales pitch for iTunes and the iTunes store:
"We approached it as 'Hey, we all love music.' Talk to the senior guys in the record companies and they all love music, too. … We love music, and there's a problem. And it's not just their problem. Stealing things is everybody's problem. We own a lot of intellectual property, and we don't like when people steal it. So people are stealing stuff and we're optimists. We believe that 80 percent of the people stealing stuff don't want to be; there’s just no legal alternative. So we said, Let's create a legal alternative to this. Everybody wins. Music companies win. The artists win. Apple wins. And the user wins because he gets a better service and doesn't have to be a thief."
Another point of reference: because they had no legal ground to stand on, HBO targeted Canadian torrenters of Game of Thrones with an e-mail saying, among other things, "It's never been easier to [watch Game of Thrones legally]!"
This was true, it had never been easier. It had also never been harder. For the entire time that Game of Thrones was being aired, the only legal way for Canadians to watch it was to pay about a hundred dollars per month for cable and the cable packages that would give them HBO. You could buy it on iTunes, but only as a season, after the season was over.
So yeah, I kept torrenting it, everyone I know kept torrenting it, and everyone hated (or laughed at, or both) HBO the whole time.
Here in the UK, Sky offer a cheap 'over-the-top' streaming alternative to their satellite offerings, [0] so you could watch Game of Thrones for £8/month, provided you didn't mind the inferior video quality.
I meant HBO! I think GoT season 1 is the only season that's had a release at that res so far.
I was really hoping to get an HDR version of the "The long night", to address some of the banding and other visibility problems present in the episode, and maybe see a bit more of what went on. But there isn't one yet. So I watched it with the lights out so that my eyes adjusted :)
But yeah, you're probably right, NowTv has massive potential to undercut their main offering.
It's true, and often it's not laziness - corporate security measures are often focused only on denying access, and they're so overbearing that, were they followed to the letter, they could easily shut the company down. It's through workarounds that actual work gets done.
Sounds like a large organizational incentive-integration failure, where the sub-pieces are at odds: each cares more about dodging blame, and anything outside its domain isn't its problem. "Not My Fault / Not My Problem" as a toxic approach that makes balancing decisions worse.
I remember having issues with a corporate email system where base64/uuencoded data would fail to get through with a very rough dependency on size - large files had a smaller chance of getting through but it was clear that there wasn't a hard size limit. Eventually someone twigged that the problem was a "rude word" scanner, and that beyond a certain size you would hit the "scunthorpe" problem, and forbidden words would appear in the ASCII text randomly.
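For anyone curious why size mattered: base64 output is effectively random letters and digits, so the chance of some "rude word" appearing somewhere grows with the length of the encoded blob. A rough Python sketch (the word and sizes are made up for illustration, and the case-insensitive match mimics a naive scanner):

    import base64, os

    WORD = b"arse"  # stand-in for whatever the scanner considered rude

    def hit_rate(size, trials=100):
        # Fraction of random blobs of a given size whose base64 encoding
        # contains the word somewhere, matched case-insensitively.
        hits = sum(
            WORD in base64.b64encode(os.urandom(size)).lower()
            for _ in range(trials)
        )
        return hits / trials

    for size in (10_000, 100_000, 1_000_000):
        print(f"{size:>9} bytes -> {hit_rate(size):.0%}")

Around a megabyte, the odds of a given four-letter word showing up are roughly even or better, which lines up with "large files had a smaller chance of getting through".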
The thing is, usability is security. People will do anything to be able to do their job (because people like being able to, you know, eat and stuff). Things that stop you doing your job are bad for security.
I wish more of the security industry would get their frigging heads around this. PGP did less for messaging security over decades of availability than iMessage and Signal did in a few weeks of availability.
This 100%. I recall many a fun night at $BIGCORP burning the midnight oil, receiving the warning emails that my "unauthorised software" had been reported to my manager, and that it had been quarantined away for my own safety and convenience. Given that $BIGCORP was a tech firm my manager would be intensely delighted that they would receive regular midnight notifications that I was doing my job. Whatever that damn thing cost it would have been cheaper to let the malware do its thing.
Windows development seems to be fun lately. I haven't touched it for a couple of decades.
Sometimes I think that modern Windows is a nice platform already, even comfortable. (Like, you know, C++17 is very unlike C++98.) But then I'm reminded of the necessity to run an antivirus in front of it in a corporate environment.
I intensely dislike corporate "security product" culture. For whatever reason, every IT department thinks that you have to ruin Windows with tons of invasive antivirus and monitoring software. I've seen zero evidence that these performance-killing tools are necessary. It's all theater. Microsoft itself doesn't do this shit to Windows, and neither should anyone else.
There was a discussion in our IT Security department about how to install McAfee on CoreOS servers. (For the uninitiated, CoreOS is a Linux distribution that comes without a package manager. It's intended as a base to run containers on, so you would deploy all software via container images.)
I remember someone suggesting to put McAfee into a fully isolated container that only exposes the port where it reports compliance, allowing it to scan itself to death all day long.
At one company, Symantec would also quarantine the compiler and build system. It certainly made builds exciting to have the antivirus playing Russian roulette with the entire toolchain.
Every time I went to configure a toolchain in JetBrains' CLion, CMake would create some test files and compile them. Windows Defender deleted every file and even the embedded toolchain. Fun :)
"You must exclude our program sub directory because temporary files are created containing interpreted code and your antivirus will ether block it outright, or lock the file so long you get application time outs"
In February, I e-mailed a python script to one of our developers to help debug an issue with their SSL configuration.
Two days ago, I needed the script again but couldn't find it. Went to our e-mail thread and it said "the following potentially malicious attachments were blocked", showing mine, but... even from my outgoing mailbox? That seems ridiculous and problematic, considering that it sent fine at the time.
I know that e-mail shouldn't be used as a replacement for Sharepoint or Dropbox or whatever, and I should have a local copy of what I need, but it just seems annoying and arbitrary.
Anyway, I just logged into Outlook Web and downloaded it from the message there. Problem solved.
If I had to deploy AV for mail, I would absolutely scan outgoing mail as well. Imagine if some compromised mail account in my org sends malware to accounts in other companies. These companies could then sue my company for negligence if they can show that we did not scan our mail for viruses on outbound (which could potentially be done by examining mail headers).
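For what it's worth, a crude version of that header check is easy to sketch. This is just an illustration: the header name varies by product (X-Virus-Scanned is what amavis/ClamAV-style gateways tend to add), and message.eml is a hypothetical saved copy of the received mail:

    import email
    from email import policy

    # Look for evidence that an AV gateway scanned the message in transit.
    with open("message.eml", "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    scanned = msg.get_all("X-Virus-Scanned") or []
    if scanned:
        for value in scanned:
            print("scanned by:", value)
    else:
        print("no obvious AV-scan header found")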
This has happened to me with gmail. Zipfiles I had sent in the past are no longer allowed to be downloaded from my sent items folder through the standard interface.
To be fair, emailing binaries (apart from known types such as images, PDFs, etc.) is a rare enough use case for legitimate purposes and an easy enough way of spamming malware to clueless random people that it's probably a reasonable default for gmail.
Having an option to allow them might be okay though. (I barely use gmail so I don't know if it has one or not.)
As for not sending binaries by email - there is no shame in being young in this case, as it means never having developed the bad habit.
Before Dropbox and similar tools it was far more the norm, and various file-sharing systems like SharePoint would often wind up not actually being used. Non-technical people in companies do it all the time, practically using e-mail as an ersatz version control system, to the cringe of IT.
We just rename our files with .novirus on the end. I assume the main point is to stop executables from outside running with a click, or internal forwards of the same by compromised users which is why it's so easy to bypass.
Yes. Whenever I email or transfer a zip via any method really I always put a basic password on it.
I've been bitten way too many times by dumb filters that pick some file out of the zip and declare that it is malicious. I also don't trust messenger apps to not pull my files out and do who knows what with them. A basic password prevents this junk 99% of the time for almost no effort.
It won't stop a determined system from cracking the password. But that isn't what I'm trying to defend against.
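If it helps anyone, here's roughly how I do it in Python. The stdlib zipfile module can read legacy-encrypted zips but can't write encrypted ones, so this sketch uses the third-party pyzipper package; the file names and throwaway password are just examples, and the recipient needs a tool that understands AES zips (7-Zip, WinZip, pyzipper, etc.):

    import pyzipper

    # Write an AES-encrypted zip; the password is only there to keep
    # automated filters from peeking, not to provide real secrecy.
    with pyzipper.AESZipFile("bundle.zip", "w",
                             compression=pyzipper.ZIP_DEFLATED,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(b"not-a-secret")
        zf.write("debug_script.py")

    # Reading it back with the same library:
    with pyzipper.AESZipFile("bundle.zip") as zf:
        zf.setpassword(b"not-a-secret")
        zf.extractall("unpacked")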
This brings back happy memories of a college (senior high for the Americans in the audience) computing teacher finding that a friend and I had been writing irritating malware instead of doing actual work, and his only comment being “if you’re going to email that to yourself change the extension so it doesn’t get flagged for IT support”.
Gmail won't even let you send a JAR file, or a zip you made out of a project where it happens to be a .jar file somewhere deep in some random subdirectory.
I left Intel a couple of years ago, and that's exactly what the passwords were used for. It was pretty annoying to try to send files, and putting them in an encrypted archive was the most convenient method.
It was not just for binaries but for scripts, html, etc.
I was an admin for a medium sized company and handled their websites. Almost all of them (about a dozen or so) were hosted on Go Daddy. Plus they had about two dozen reserved domains they were sitting on like www.yourcompanysucks.com and others.
I left the company 5 years ago. Just checked the login to see if it still worked.
Yeap.
Any disgruntled employee could change the password, lock them out of all of their sites (including several e-commerce sites that account for a large chunk of revenue) and then if they really wanted to, delete all of them.
I remember talking to the main network guy about backups when a lot of the ransomware stuff was making the rounds. The big, really big stuff on their network (mostly ERP stuff) was backed up in two or three places. Their web stuff? Yeah. . . NOPE.
Pretty scary how lazy people are about stuff like that.
I wonder if malware should just grep for "pw:" or "password:" and then try the string it finds against anything encrypted. Or forward it to the control center.
I worked for a company that made servers. In the on board management system's source code I remember seeing "base64 encryption". I think they removed it by the time I left, but still.
A company I know insists on rotating passwords fairly often. Everybody just increases the number at the end of their favourite password, e.g. intel1255
I once worked at a place that required passwords to be changed every month and contain at least one upper and lower case letter, digit, and punctuation, and not match any previous password.
So the password for August, 2020 would be “August, 2020”.
This is super common, to the point where Microsoft used a similar password scheme as an example when talking about password spraying attacks at an RSA conference presentation
It's why I'm advocating within my organisation to get rid of password expiration and enforce 2FA for clients, but there's a lot of inertia to push against with some of them. At least uptake of 2FA is consistently increasing.
If you need backup, NIST standards agree with you.
Scheduled password expiration weakens security by encouraging users to make predictable passwords, and by entrenching password resets as a routine and unscrutinized process.
Many DoD websites are the same. It's so annoying. I use a password manager at home but at work I don't have that luxury (installable software is tightly controlled and very limited).
Also, the passwords are listed in docs that appear to be alongside the encrypted files. That's a bit like leaving the keys to your house _on top_ of your front doormat.
It's kinda like hiring a security guard for insurance purposes, even though they have strict instructions to never do anything, under any circumstances, other than call emergency services.
At my first job they used a similar password as their go-to "temporary" password for users etc. I found out later, when I got to work with the users, that they rarely changed this password even when "forced" to, and in many cases had it up on post-its next to their monitor.
> and in many cases had it up on post-its next to their monitor.
These days a post it is probably the best way to secure your password.
99.9999999% of password hacks come over the wire now, from people in other cities, states, or nations. If someone is in your building, in front of the computer, even without the post-it, you're probably toast.
At a previous workplace we had a few places in the code which used the word backdoor. It was not an actual backdoor though, but merely a debugging server that could be enabled and allowed you to inspect internal state during runtime. At some point I removed the word backdoor, fearing it would get to a customer or during an audit someone would misunderstand. :|
Once I got a complaint from a security auditor that some code was using MD5. It wasn’t being used for any security purpose, just to check whether an autogenerated file had been manually edited. We decided it was easier to do what they wanted than argue with them, so we replaced it with CRC32C. That would have been faster than MD5, but nobody cares about saving a few milliseconds off reading a configuration file at startup. It would have made the manual edit check somewhat less reliable, but probably not by much in practice. But the security auditor was happy we’d stopped using MD5.
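For context, the non-security check in question amounts to something like this (a sketch, not the actual code; I've used plain zlib CRC-32 because CRC32C needs a third-party library, and the file name and stored checksum are placeholders):

    import zlib

    def file_crc32(path):
        # Checksum used only to notice accidental manual edits of a
        # generated file -- not for any security purpose.
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                crc = zlib.crc32(chunk, crc)
        return format(crc & 0xFFFFFFFF, "08x")

    recorded = "0a1b2c3d"  # placeholder: value written out when the file was generated
    if file_crc32("generated.conf") != recorded:
        print("generated.conf looks like it was edited by hand")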
You don’t actually need to listen to auditors. People like you (who can’t be bothered to argue because it’s apparently too hard) is the reason that smartass is still selling their services.
You either have way more grit at arguing than most people or you haven't worked at a large and cumbersome organization.
I know most people at those kinds of organizations just don't have the grit to fight every one of those battles all over again, and choose to do the things they can affect with reasonable effort instead.
I'm not saying that grit would be a bad thing to have. I appreciate the people who do it. But you really can't know what kinds of situations the parent commenter was in, and sometimes you can't really expect everyone to want to fight it.
Sometimes the point isn't technical, but social. So MD5 isn't used for security purposes right now. At some point someone will want some hashing function, and they'll probably look at what the code already uses. The last thing you want is someone a bit clueless going "it was good enough there, it's good enough here" and using MD5 where they shouldn't. Removing it from a codebase helps with that problem.
The problem here is that people assume they know every possible reason why the auditor might ask for something, when they don't. If the auditor is asking for it, and it costs almost nothing to do, maybe just do it instead of wasting everyone's time by acting like you know the totality on the subject, and everyone will probably go home happier at the end of the day.
Isn't that what code review is for? To me that sounds like arguing against string formatting because someone could think it's ok for SQL queries.
An auditor's job doesn't end at saying what things should be changed, it should include why as well (granted, we don't know the full content of the auditor's report here, maybe they did say why).
The reason why CRC32C was chosen as a replacement instead of SHA-2 or whatever - what happens if in a few more years, SHA-2 isn’t considered secure any more and some future security audit demands it be changed again? Whereas, a CRC algorithm isn’t usually used for security purposes, so a security audit is far less likely to pay any attention to it. The whole issue started because a security-related technology was used for a non-security purpose.
> what happens if in a few more years, SHA-2 isn’t considered secure any more and some future security audit demands it be changed again
Then change it again? If you use the most recent available NIST standard it should hopefully be a very long time before meaningful (let alone practical) attacks materialize (if ever). If you end up needing to worry about that in a security audit, consider it a badge of success that your software is still in active use after so many years.
Using an insecure hashing algorithm without a clear and direct need is a bad idea. It introduces the potential for future security problems if the function or resultant hash value is ever used in some unforeseen way by someone who doesn't know better or doesn't think to check. Unless the efficiency gains are truly warranted (e.g. a hash map implementation, high throughput integrity checking, etc.) it's just not worth it.
> a security-related technology was used for a non-security purpose
I would suggest treating all integrity checks as security-related by default since they have a tendency to end up being used that way. (Plus crypto libraries are readily available, free, well tested, generally prioritize stability, and are often highly optimized for the intended domain. Why would you want to avoid such code?)
Ahh poop, looks like I was out of date. Apparently a practical attack with complexity ~2^60 was recently demonstrated against legacy GPG (the v1.4 defaults) for less than $50k USD. [1] That being said, it looks like it still required ~2 months and ~900 GPUs, versus MD5 at 2^18 (less than a second on a single commodity desktop processor).
So yeah, I agree, add SHA-1 to the list of algorithms to reflexively avoid for any and all purposes unless you have a _really_ good reason to use it.
The reason they ask is that they have to fill a checkbox that says "no MD5", and of course they don't know that CRC32 is worse.
And to be very fair, a lot of security issues would be caught with basic checkbox ticking. Are you using a salted password hashing function instead of storing passwords in plaintext? Are you using a firewall? Do you follow the principles of least privilege?
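And the first of those checkboxes is cheap to satisfy. A minimal sketch of what "salted password hashing" means in practice, using only the standard library (PBKDF2 here purely as an illustration; bcrypt/scrypt/argon2 are common choices too, and the iteration count is just a ballpark):

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # Per-user random salt plus a deliberately slow hash.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, expected):
        # Constant-time comparison of the recomputed hash.
        return hmac.compare_digest(hash_password(password, salt)[1], expected)

    salt, stored = hash_password("hunter2")
    assert verify_password("hunter2", salt, stored)
    assert not verify_password("Intel123", salt, stored)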
Because most times people aren't "just right", they're just unwilling to widen their point of view, and/or they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.
I don't need some coworker getting into some drawn out battle about how MD5 is fine to use when we can just use SHA (or CRC32C as that person did, which is more obviously non-useful for security contexts) and be done in 30 minutes. The auditor is there to do their job, and if what they request is not extremely invasive or problematic for the project, implementing those suggestions is your job, and arguing over pointless things in your job is not a sign of something I want in a coworker or someone I manage.
> they turn the issue into a way to assert their own importance and intellect over someone else at the expense of those they work with.
This is exactly what the auditor is doing.
How can you not see the irony here?
> I don't need some coworker getting into some drawn out battle
This isn't a drawn out battle. This is a really fast one, md5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
What's fucking hard about that?
Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
The auditor was asked to do it and is being paid to do it. Presumably, the people arguing are paid to implement the will of those that pay them. At some point people need to stop arguing and do what they're paid to do or quit. Doing this over wanting to use MD5 seems a pretty poor choice of a hill to die on.
> This is a really fast one, md5 is fine here, you didn't check the context of its use, that's fine, what's the next item on your list?
There are items like this all throughout life. Sure, you can be trusted to drive above the speed limit on this road, and maybe the speed limit is set a little low. But we have laws for a reason, and at some point you letting the officials know that the speed limit is too low and they really don't need to make it that low goes from helpful to annoying everyone around you.
> What's fucking hard about that?
Indeed, what is so hard about just accepting that while you're technically correct that MD5 isn't a problem, you're making yourself a problem when you fight stupid battles nobody but you cares about, but everyone has to deal with?
> Is this some kind of weird cultural thing with American schooling teaching kids they can't question authority?
Hardly. Pompous blowhards exist in every culture. Also, that's hilarious. You're talking about a culture that rebels against authority just because they think that's what they're supposed to do, even if it's for stupid reasons and makes no sense. See the tens of millions of us that refuse to wear masks because it "infringes on our freedom".
I'm paid to tell idiots where to go. My boss doesn't pay me 6 figures to toe the line and fill in boxes. She pays me to use my judgement to move the company forward. I'm not wasting my time and her money on this sort of garbage, and if they can't see the difference between casual use and secure use then we need to rethink our relationship with this company or they need to send us someone new.
> You're talking about a culture that rebels against authority
You just used the line "do what you're told or quit".
I've very specifically couched all my recommendations for cases where it's trivial to do. Arguing about this with someone instead of doing it, when doing it may have some benefits and really only costs a few minutes, is definitely wasting her time and money.
> You just used the line "do what you're told or quit".
I noted what I wished people would do in very specific cases where they're wasting way too much time and effort to win a stupid argument rather than make a small change of dubious, but possibly not zero, positive security impact.
I don't see anything weird about acknowledging some of the extreme traits of the culture I live in while also wishing they would change, at least in specific cases where I think they do more harm than good.
Honestly, I'm confused why you would even make some cognitive leap that since I live in an area with a specific culture I must act in the manner I described that culture, especially when I did it in a denigrating way. I guess you think all Americans must be the same? That doesn't seem a useful way to interact with people.
As a technical choice, that's true. So the argument shouldn't be hard to win, assuming you're dealing with reasonable people, who are also answering to reasonable people. Those people (e.g. the leadership) also need to care enough about that detail to just not dismiss your argument because making the change is not a problem for them. And they need to not be so security-oriented (in a naive way) as to consider a "safer" choice always a better one regardless of whether there's a reasonable argument for it or not.
That's more assumptions than it is sometimes reasonable to make.
"You don’t actually need to listen to auditors" is decidedly not true for a lot of people in a lot of situations, and arguing even for technically valid or reasonable things is an endurance sport in some organizations.
I mean, I even kind of want to agree with heavenlyblue's argument that you should fight that fight for the exact reason they're saying, and can see myself arguing the same thing years ago, but at least in case of some organizations, blaming people for taking skissane's stance would be disproportionate.
Oh sorry, I thought we were discussing working with rational people.
If you're working with irrational people you're going to have to do irrational things, but that's kind of a given isn't it? We don't really need to discuss that.
Not hard to win if everyone is being reasonable. Given an auditor that thinks all uses of MD5 are proscribed, what would you put the odds of them being reasonable at?
ETA: per 'kbenson it's not hard to conceive of a situation where proscribing MD5 is reasonable. Taking 'skissane's account at face value is probably reasonable, but my implicit assumption that the auditor would not explain if pressed isn't being charitable.
For 10 years now, I have refused to acknowledge the finding of the consulting company which flags the password scheme I use (passphrases), because the norm they use (a national one) talks about caps, symbols, etc.
I refuse to sign off and note that our company is a scientific one and that, unlike the auditors, we understand math taught to 16-year-old children.
This goes to the board, who get back to me; I still refuse on ethical grounds, and we finally pass.
It is sad that some auditors are stupid while others are fantastic, and that you depend on which one you get assigned.
Sometimes customers demand security audits as part of sales contracts. If it is a high enough value deal, the company may decide it is in their business best interest to say yes. In that scenario, not listening to the security auditor is not a viable option. You need to keep them onside to keep the customer onside.
Similarly, sometimes in order to sell products to government agencies you need to get security audits done. In that scenario, you have to listen to the security auditor and keep them onside, because if you don't keep them happy your ability to sell the product to the government is impeded.
I have a feeling that these auditor people just make up bullshit when they can't find something real. The last few we have got have come up with total non issues marked as severe because they are easy to "exploit".
Meanwhile I have been finding and fixing real security issues regularly. To be fair it would be extremely difficult for an external person to find issues in the limited time they have so the audit comes down to someone running through a list of premade checks to see if they find anything.
One thing I learned when I worked in internal IT security when dealing with auditors was that they will boil the ocean to find an issue, so never be perfect and leave a few relatively easy but not obvious to spot issues for them to write up that don't actually affect the security of your environment. If you don't leave them this bait, they will spend weeks to find a trivial issue (like using MD5 to check for config file changes vs password hashing) and turn it into a massive issue they won't budge on.
The other issue is that if you make it seem too easy to answer their questions or provide reports, they will only ask more questions or demand more reports so even if its just dumping a list of users into a CSV file for them to review, make it seem like way more effort than it actually is otherwise you might find you've been forced into a massive amount of busy work while they continue to boil the ocean.
Smart auditors ask for all items at the beginning of the audit.
Smart IT people give them all items at the end of the audit.
Auditors have only a limited time budget. The later they get answers, the less time for them is left for follow-up questions.
3D chess! I agree sometimes it feels as if the security review questions are just set-ups for follow-ups that they didn’t include in the initial form (for whatever reason)
I've had audits like that, many are just for CYA and I'm often the dev patching obscure (or not so obscure) security issues.
Honestly, I'm quite happy to have an auditor nitpick a few non-issues if the alternative is risking releasing an app that has a basic sql injection attack that wiggled past code review due to code complexity.
I've also had an external audit that found an unreported security issue in a new part of a widely used framework, so there are auditors out there that do a good job of finding legitimate things.
Some years ago I worked at $BIGBANK and an auditor from $GOVERNMENT told us to change the street name property from a text field to a dropdown (for all countries) to help them with fraud detection, and to remove all diacritic characters from client names because their new software didn't like them.
I told my manager that they were idiots and I wouldn't listen to them; he was like 'OK, as I expected' and never did anything about it. The next auditors didn't mention it.
This makes me wonder about the reliability of address verification technology.
There are plenty of addresses where the official version in databases is slightly off from what people actually write on their mail. If I got a credit card transaction with the "official" version, that would be a significant fraud signal, that they were sourcing bogus data from somewhere.
So much this. My company just got done shelling out a ton of money for some asshat to tell me that we can't use http on a dev server. <head smashes through desk>
I actually think that's valid. Sure, http on a dev machine isn't a security risk. But there is a tail risk that it ends up somewhere on a system that sends data between machines. Also, using http on dev and https on prod can lead to unexpected bugs. Banning http is not unreasonable.
Same with the md5 complaint. That use of md5 wasn't a problem but there's a perfectly fine alternative and if you can ensure by automated tests that md5 is used nowhere, you also can guarantee that it's never used in a security relevant context.
> and if you can ensure by automated tests that md5 is used nowhere
You can automatically check for the string "md5" in identifiers, but you can't reliably automatically check for implementations of the MD5 algorithm. All it takes is for someone to copy-paste an implementation of MD5 and rename it to "MyChecksumAlgorithm" and suddenly very few (if any) security scanning tools are going to be smart enough to find it.
(Foolproof detection of what algorithms a program contains is equivalent to the halting problem and hence undecidable, although as with every other undecidable problem, there can exist fallible algorithms capable of solving some instances but not others.)
It's worse when the asshat convinces your manager that every internal site, whether dev or not needs https. Certs everywhere. Our team spends a decent % of our time generating and managing certs...
Are you talking about a fully internal site, with not even indirect Internet access? For those kinds of airgapped applications, you should maintain your own CA infrastructure, and update all clients/browsers to trust its certificates.
For the more common scenario of internal sites/services which are not accessible from the public Internet, but not fully isolated from it either:
You don't need the internal site exposed to the Internet. If you use DNS-01 ACME challenge, you just need to be able to inject TXT records into your DNS. Some DNS providers have a REST API which can make this easier.
Another option – to use HTTP-01 ACME challenge, you do need the internal host name to be publicly accessible over HTTP, but that doesn't mean the real internal service has to be. You could simply have your load balancer/DNS set up so external traffic to STAR.internal.example.com:80 gets sent to certservice.example.com which serves up the HTTP-01 challenge for that name. Whereas, internal users going to STAR.internal.mycompany.com talk to the real internal service. (There are various ways to implement this – split horizon DNS, some places have separate external and internal load balancers that can be configured differently, etc)
Yet another option is to use ACME with wildcard certs (which needs DNS-01 challenge). Get a cert via ACME for STAR.internal.medallia.com and then all internal services use that. That is potentially less secure, in that lots of internal services may all end up using the same private key. One approach is that the public wildcard cert is on a load balancer, and then that load balancer talks to internal services – end-to-end TLS can be provided by an internal CA, and you have to put the internal CA cert in the trust store of your various components, but at least you don't have the added hassle of having to put it in your internal user's browser/OS trust stores.
(In above, for STAR read an asterisk – HN wants to interpret asterisks as formatting and I don't know how to escape them.)
Which means if someone gets access to the internal network, they can read all traffic. And even dev systems can send confidential data. With letsencrypt and easy to generate certificates, https everywhere is very reasonable.
Even with VPN. I don't want any person on the vpn to be potentially able to read traffic between internal services. I think that would fail many audits.
It does though. There's no excuse for unencrypted traffic. Google doesn't have some VPN with squishy unencrypted traffic inside. Everything is just HTTPS. If they can do it, so can you. It's just not that hard to manage a PKI.
Does your organization disable the "Non-secure" prompt in the browser as well? If not, I'd say that it does seem like a security risk to train your users to ignore browser warnings like that.
It's not easily automated. Somehow, you have to safely get a certificate across the air gap to the internal network.
So I guess an internet-connected system grabs the certificates, then they get burned to DVD-R, then... a robot moves the DVD-R to the internal network? It's not easy. It's all much worse if the networks aren't physically adjacent. One could be behind a bunch of armed guards and interlocking doors.
An airgapped network can include its own internal CA, and all the airgapped clients can have that internal CA's certificate injected into their trust stores, and all the services on the airgapped network can automatically request certificates from the internal CA – which can even be done using the same protocol which Let's Encrypt uses, ACME, just running it over a private airgapped network instead of over the public Internet.
We have a ton of internal stuff, most of it doesn’t even have external DNS. We use long-lived certs signed with our own CA. We’d prefer an automated solution using a “real” CA, but none seems to be available.
I was on the receiving end of a security audit issue. I closed the bug as won't-fix and my lead approved it, but when the team who paid the security auditor found out, they demanded I fix it. I had to argue with IT, infosec, and the auditor. Nobody really cared what I did, they just wanted to follow the rules. After a month of weekly hour-long meetings I relented and changed the code.
You're often not arguing with the auditor, you're arguing with the person who paid the security auditor in the first place, who is likely not even technical. That's a battle you will likely never win.
To add to this, often the primary goal of the person who paid the security auditor is not to actually increase security. It is to get to claim that they did their due diligence when something does happen. Any arguments with the auditor, no matter how well founded, will weaken that claim.
Depends I suppose. When your CFO tells you to fix it so you're in compliance, your opinion doesn't matter a whole lot. Never mind if it is a government auditor or their fun social counterpart the site visitor.
I once got cited for having too many off-site backups. They were all physically secure (fire proof safes or bank lock box), but the site visitor thought onsite was fine for a research program. The site visitor's home site lost all its data in a flood.
Sometimes an inexperienced auditor will show a minor finding that is a sign of a bigger issue. For example, if Windows is in FIPS mode, some MD5 functions will be disabled.
If you need to be operating in FIPS 140 mode, that may be a problem of some consequence.
And it's good! Code Reviews can't surface all issues. Independent audits should be welcomed by developers to find more bugs and potential security risks (even though I'm a bigger fan of penetration tests instead of audits).
When you're trying to keep a company of 100,000 employees secure, you can't have an approach that says "let's figure out where we need to remove MD5 and remove it." You have to set an easy to understand, consistent guideline -- "tear out the MD5" -- so that there won't be any doubt as to whether it's done, some teams won't complain that they shouldn't have to change it because some other team didn't have to change it, etc. And then every time they do a security audit the same thing will come up and cause more pointless discussion.
In isolation it looks like wasted work but in terms of organizational behavior it is actually the easiest way.
Happened to me as well. Was writing an authentication service. We thought we were paying for an actual security audit; turns out we paid for a simple word scanning of our codebase. The review didn't find any of the canaries we left in the codebase, and we could never argue back with them. Big waste of money.
Huh. I'm thinking it'd be fun to write code with known issues (with varying degrees of obviousness) and hire a bunch of different "auditing companies" to see which ones pick up on that.
Publish the result for market comparison's sake.
Then again, that requires plenty of money and I can't see how to monetize that in any way.
Not only that, but MD5 still doesn't have an effective preimage attack, so it is still good enough for things like hashing passwords or checking that someone else didn't tamper with your files.
Still, when it comes to security:
- MD5 is actually too fast for hashing passwords, but there is still no better way than bruteforce if you want to crack md5-hashed-salted passwords.
- Even if there is no effective preimage attack now, it is still not a good idea to use an algorithm with known weaknesses, especially if something better is available.
What MD5 is useless for is digital signature. Anyone can produce two different documents with the same MD5.
Defense in depth, if you can grep the source code and not find any references to md5, then you have quickly verified that the code probably doesn't use md5.
This you can easily verify again later, you can even make a test for it :)
Even if in practice this had no impact, removing md5 usage will make it harder to accidentally reintroduce it in the future.
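Something like the following is all that test takes (a sketch assuming pytest and a src/ directory; as pointed out elsewhere in the thread, it only catches the string "md5", not a renamed copy of the algorithm):

    import pathlib

    def test_no_md5_references():
        # Fail the build if any source file mentions md5 at all.
        offenders = [
            str(path)
            for path in pathlib.Path("src").rglob("*.py")
            if "md5" in path.read_text(encoding="utf-8", errors="ignore").lower()
        ]
        assert not offenders, f"md5 mentioned in: {offenders}"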
The issue is not md5. The issue one wants to detect is weak hash functions used in cases where they're not appropriate. The fact that crc32 passed means that any obscure hash function would have passed too, even if it had been used in a context where it isn't appropriate.
All it means is that the audit is superficial and doesn't catch the error category, just famous examples within that category. That kind of superficial scanning may be worth something when unleashed on security-naive developers, or even as optional input for more experienced ones. But "hard compliance rules" and "superficial scans" combine to create a lot of busywork which makes people less motivated to work with auditors instead of against them.
Both perspectives are somewhat correct, I feel; the requirement to remove any usage of md5 is beneficial, but the fact that crc32 passed means the audit shows the motivation was misplaced.
The resulting situation might of course not be a net benefit though :/
> But "hard compliance rules" and "superficial scans" combine to create a lot of busywork which makes people less motivated to work with auditors instead of against them.
Absolutely :)
The fact is that if you have experienced engineers a security audit is rarely able to find anything. You would basically have to do code reviews, and this is hard / expensive, and even then rarely fruitful.
So, superficial scans, hardening, checking for obvious mistakes is really all you can do.
Making hard rules is unproductive, but then again, migrating from md5 to crc32 hopefully isn't very expensive.
IMO, crc32 is a better choice for testing for changes, and has the benefit of removing any doubt that the hash has any security properties.
Next up: Replace MD5 with BASE64+ROT13. Significantly worse functionality AND performance, but sounds more secure (to a layman) and doesn't trigger the "MD5" alert...
Base64 encoding does protect somewhat against "looking over your shoulder" attacks
(Unless the person looking over your shoulder has a really good memory and can remember the Base64, or decode it in their head. Or they have a camera.)
But way more people would use md5 for password hashing than crc32. Of course someone could circumvent these tests, but the risk of someone copying an old tutorial where md5 is used for password hashing can be mitigated.
I've seen similar rigidity from security audits. Stuff like "version 10.5.2 (released last week) of this software introduced a security bug that was fixed in 11.0 (released today), we need you to update from 10.5.1 (released last week + 1 day) to 11.0 now because our audit tool says so".
It seems like a thin line between a debugging feature and a backdoor; "merely a debugging server that could be enabled and allowed you to inspect internal state during runtime" seems like a backdoor to me, doubly so if it's network-accessible. If Intel has, say, an undocumented way to trigger a debug mode that lets you read memory and bypass restrictions (ex. read kernel memory from user mode, or read SGX memory), is that not a backdoor? Or is the name based on intent?
I think the difference is whether it's something that's always enabled. You could presumably make it available or not at compile time, so the software shipped to a customer wouldn't have it, but maybe if they were having issues, you could ship them a version with the debug server with their permission.
I can agree with that, with the caveat that "enabled" has to be something that only the user can do. If it requires that the customer intentionally run a debug build, that's fine; if it can be toggled on without their knowledge, then it's a problem.
It was disabled by default, and could only be enabled using environment variables. Even when enabled, the whole thing ran in Docker and the socket was bound to loopback, so you could only connect to it from within the container.
When the intention is a debugging server, making it exposed to the world is a mistake and a security vulnerability. At that point it is effectively a backdoor, but the difference between a high level vulnerability such as this and a backdoor is developer intent.
Sure, it's simple. But you would have to be able to modify the container settings anyway. For all practical uses, and certainly in my case, you could just make it run a different image at that point. Or copy another executable into the container and run it. You're already privileged. Requiring you to be privileged to access the debug server means it's secure.
Until things around change and what was previously "a secure backdoor" becomes a "less secure backdoor". ;-)
One can read every second week about cases where some backdoor that was meant to be used "only for debugging" landed in the end product and became a security problem.
Actually I usually suspect malice when something like that is found once again, as "who the hell could be so stupid to deliver a product with a glaring backdoor". But maybe there is something to Hanlon's razor… :-D
I'm talking about the general sentiment. You can see this on every* site, HN included. The litmus paper is that even pointing out something objectively true will get criticism (downvotes) rather than critical thinking. In the current atmosphere nobody asks the question when it comes to China/Russia/NK/Iran but will when it comes to the US despite the known history of hacking/spying on everyone else.
*Recently a reputable tech site wrote an article introducing DJI (ostensibly a company needing no introduction) as "Chinese-made drone app in Google Play spooks security researchers". One day later the same author wrote an article "Hackers actively exploit high-severity networking vulnerabilities" when referring to Cisco and F5. The difference in approach is quite staggering especially considering that Cisco is known to have been involved, even unwittingly, in the NSA exploits leaked in the past.
This highlights the sentiment mentioned above: people ask the question only when they feel comfortable that the answer reinforces their opinion.
A manufacturer wanted to upgrade one of their equipment lines to be more modern. The developers of the original product, both hardware and software, were no longer with the company.
Since they just wanted to add some new features on top and present a better rack-based interface to the user, they decided to build a bigger box, put one of the old devices inside the box, then put a modern PC in there, and just link the two devices together with ethernet through an internal hub also connected to the backpanel port and call it a day.
The problem is, if you do an update, you need both the "front end" and the "back end" to coordinate their reboot. The vendor decided to fix this by adding a simple URL to the "backend" named: /backdoor/<product>Reboot?UUID=<fixed uuid>
Their sales team was not happy when I showed them an automated tool in a few lines of ruby that scans the network for backend devices and then just constantly reboots them.
They still sell this product today. We did not buy one.
They sold very expensive devices that were actually an off-the-shelf 1U PC with custom software (which provided the real value). The problem — and this dates it — was that the PCs had a game port¹, which gave away that this custom hardware was really just a regular consumer PC. So they had some fancy plastic panels made to clip on the front and hide the game port.
I remember early in my career I came across a Unisys “mainframe”, which was literally a Dell box with a custom bezel, clustered with a few other nodes with a Netgear switch.
Many non-IBM mainframe vendors switched to software emulation on more mainstream platforms-nowadays mainly Linux or Windows on x86, but in the past SPARC and Itanium were also common choices. What you saw may have been an instance of that. A software emulator can often run legacy mainframe applications much faster than the hardware they were originally written for did.
(With Unisys specifically, at one point they still made physical CPUs for high end models, but low end models were software emulation on x86; I’m not sure what they are doing right now.)
I don't know the details (~20 years ago), but pretty sure you hit the nail on the head. I think one of the boxes I saw were a hybrid -- Xeons with some sort of custom memory controller.
It was my first exposure to this sort of thing, and I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)
> I was taken aback by the costs of this stuff, which made the Sun gear I worked with look extremely cheap :)
Given the shrinking market share of mainframes, the only way for vendors to continue to make money is to increase prices on those customers who remain – which, of course, gives them greater encouragement to migrate away, but for some customers the migration costs are going to be so high that it is still cheaper to pay megabucks to the mainframe vendor than do that migration. With emulated systems like the ones you saw, the high costs are not really for the hardware, they are for the mainframe emulation software, mainframe operating system, etc, but it is all sold together as a package.
At least IBM mainframes have a big enough history of popularity, that there are a lot of tools out there (and entire consulting businesses) to assist with porting IBM mainframe applications to more mainstream platforms. For the remaining non-IBM mainframe platforms (Unisys, Bull, Fujitsu, etc), a lot less tools and skilled warm bodies are available, which I imagine could make these platforms more expensive to migrate away from than IBM's.
Even if this poster made it up, I'm certain it is also true at least once over, having remediated a near-identical problem from one of my employers' products at one point, and talked developers out of implementing it at least once at a different employer.
The older I get, the less I care if individual stories like this are true. The fact that they could be is concerning enough :) And they are educational nonetheless.
The time iDrac annoyed me the most is when I bricked a server trying to update it.
I made the terrible mistake of jumping too far between versions and the update broke iDrac and thus the server. There was no warning on Dell's website nor any when I applied the update. I only found out what happened after some googling where I found the upgrade path I should have taken.
This is just terrible quality control and software engineering.
At my previous employer our code was littered with references to a backdoor. It was a channel for tools running in guest operating systems to talk to the host hypervisor through a magic I/O port.
> I must warn you about those jokes. Firstly, they are translated from Russian and Hebrew by yours truly, which may cause them to lose some of their charm. Secondly, I'm not sure they came with that much charm to begin with, because my taste in jokes (or otherwise) can be politely characterized as "lowbrow". In particular, all 3 jokes are based on the sewer/plumber metaphor. I didn't consciously collect them based on this criterion, it just turns out that I can't think of a better metaphor for programming.
> Seems like an outdated term. Downvotes accepted.
Manhole is, indeed, an outdated term. Generally the preferred term is "Maintenance Hole". Still abbreviated MH, and people in the field use all three interchangeably (much like metric/imperial).
Source: I work with storm/sanitary/electrical maintenance holes.
The link is about the debate as it is, but I would also encourage the use of good faith in interpreting any speaker: that is, assuming a person referring to "mankind" likely means all humans without exclusion based on gender or sex, and requiring some other material evidence before presuming bias.
I also wonder what these discussions are like in languages where most nouns are gendered, e.g., in French.
No clue about French, but in German they started to use both versions at the same time, glued together in made-up "special" forms. It's like using "he/she" for every noun. This makes texts completely unreadable, and you even need browser extensions[1] to not go crazy with all that gendered BS language!
OK, I exaggerate, there are still people that don't try to be "politically correct" and still use proper language, and know that there is such a thing called "Generisches Maskulinum (English: generic masculine)"[2]. But in more "official" writings or in the media the brain dead double-forms are used up until the point you can't read such texts any more: Those double-forms (which are not correct German) cause constant knots in the head when trying to read a text that was fucked up this way.
(Sorry for the strong words, but one just can't formulate it differently. As the existence of those browser extensions clearly shows, I'm not alone when it comes to going mad about that rape of language. Also, often whole comment sections don't discuss the topic at hand; instead most people complain about the usage of broken "gendered" pseudo-politically-correct BS language. That noun-gendering is like a disease!)
Believe it or not, we introduced a variant of bash brace expansion (except with implicit braces and dots instead of commas) in our grammar, named it “écriture inclusive”, and called it a day.
The way it kicks words previously loaded with neutrality to the curb simply because they happened to have the same spelling as the gendered ones, and entrenches a two-gender paradigm, boggles the mind as to how it flies in the face of any form of inclusivity.
That and I still don’t know how to read “le.a fermi.er.ère” aloud. It’s just as ridiculous as “cédérom” because Astérix puts up a show at standing against the invader.
> In practice, grammatical gender exhibits a systematic structural bias that has made masculine forms the default for generic, non-gender-specific contexts.
Many instances of this are simply an artifact of 'man' previously being an un-gendered term. But that fact is much harder to build group cohesion around than grievance.
I have learned that flat out telling people that a hill isn't worth dying on tends to cause a bunch of corpses to collect up - if you don't want a molehill covered in bodies you need to persuade them to go die somewhere else.
Yeah agree. And I think we could agree replying "Ew" and losing a little bit of HN karma does not constitute more than bruising.
EDIT: didn't see the "or even" there. Disagree. I think the analogy can be drawn out a bit, so I'll say that a bruise can heal pretty quick, and one would adapt better to climbing "hills" if they exercised regularly. Plus maybe smaller hills should be climbed too.
I'm really not at all interested in people explaining to me how finding mentions of back doors in technology used in millions of computers is probably OK because it may mean something else.
Given that the US security apparatus clearly values and desires these back doors, and has the necessary power to coerce companies into making them, generalizing the use of "back door" as a term for debugging or whatever seems almost expected.
Even if they are for debugging "oops it's on in production!" is a great cover because none of these companies will EVER admit back doors were required by the government.
I worked at a place where IT had an admin user on every machine named "Backdoor". I opened a ticket when I noticed it, which was promptly closed explaining that it was normal.
The same place had a boot script on every computer that wrote to a network-mounted file. Everyone had read permissions to it (and probably write, but I didn't test) and the file contained user names, machine names, and date-times of every login after boot for everyone on the domain going back 5 years. I opened a ticket for that, which was never addressed.
indeed it literally was the author's suggestion to search for the word 'backdoor':
>This code, to us, appears to involve the handling of memory error detection and correction rather than a "backdoor" in the security sense. The IOH SR 17 probably refers to scratchpad register 17 in the I/O hub, part of Intel's chipsets, that is used by firmware code.
Judging from the context, in all likelihood it is the opcode that APEI (a part of ACPI) tables write to port 0xB2 in order to invoke firmware services that run in system management mode.
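For context, here is a minimal sketch of what an OS-side write to that port looks like on Linux. The command byte below is purely a placeholder (real values are platform/firmware specific), and poking this port on real hardware can hang the machine:

  # Minimal sketch: poking the APM command port (0xB2) via Linux's /dev/port.
  # Requires root; HYPOTHETICAL_CMD is a made-up placeholder value.
  # Do not run this on a machine you care about: an unexpected SMI can hang it.
  APM_CNT_PORT = 0xB2
  HYPOTHETICAL_CMD = 0xAB  # placeholder, not a documented command

  with open("/dev/port", "r+b", buffering=0) as ports:
      ports.seek(APM_CNT_PORT)
      ports.write(bytes([HYPOTHETICAL_CMD]))  # firmware traps this write in SMM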
>merely a debugging server that could be enabled and allowed you to inspect internal state during runtime
When we're talking about a CPU, it's bad enough. Imagine your program has input and output streams that most of the app's data goes through, and I can attach a debugger and listen in on that data.
I would not be very happy about it and would still consider it a backdoor.
Because until this thing gets diffused and dissected by everyone and their mothers, the law is likely to view it as publication of confidential trade secrets, and people who can be confirmed to be spreading such things can get federal time (see [1] for an example). Using a VPN is the barest of mechanisms to try to obscure your identity and avoid this sort of punishment.
I think there's a big difference between selling chemical secrets to a hostile government and this torrent. Namely, that no one is selling this information; it's available to anyone who can grab the magnet link.
Here is the real question: are you confident enough in your statement to argue that way when confronted by your government (or whatever the concerned body is here)? If yes, then feel free to do whatever you want with your free time and bandwidth; otherwise you're better off staying as far as possible from this data.
I notice that you have no one in your circle of acquaintances who has illegally downloaded movies via torrents and been caught. I don't know how it is in other countries, but here in Germany friendly people ring your doorbell and take away everything that is connected to electricity :). And if there is any data in there that is very damaging to Intel, then I think they will take the trouble to go after these people (at least in certain countries).
i'm sorry you live in a hellhole country and your friends don't understand how bittorrent works. maybe one day you can immigrate to a second-world country and grow some cojones, but until then you should continue living in fear and scaring your peers from downloading leaks early when there aren't fed trackers.
IANAL and you should probably contact yours about such things but a straightforward reading suggests that because you knew you were downloading something likely illegally gotten, you are in fact on the hook for downloading it.
“Misappropriation” means:
(i) acquisition of a trade secret of another by a person who knows or has reason to know that the trade secret was acquired by improper means; or
(ii) disclosure or use of a trade secret of another without express or implied consent by a person who
    (A) used improper means to acquire knowledge of the trade secret; or
    (B) at the time of disclosure or use knew or had reason to know that his knowledge of the trade secret was
        (I) derived from or through a person who has utilized improper means to acquire it;
        (II) acquired under circumstances giving rise to a duty to maintain its secrecy or limit its use; or
        (III) derived from or through a person who owed a duty to the person seeking relief to maintain its secrecy or limit its use; or
    (C) before a material change of his position, knew or had reason to know that it was a trade secret and that knowledge of it had been acquired by accident or mistake.
Depends heavily on the jurisdiction, I am afraid. This exact case was used as a precedent where I'm from (Czech Republic) that no, merely downloading over BitTorrent still constitutes "sharing copyrighted material".
Presumably that was because BitTorrent sends data even before receiving 100% of it? But I assume that downloading these files would not be allowed in this case anyway as per Zákon č. 121/2000 Sb. §29 (2) since this is not a published work.
The only place I know of where that would be the case is Switzerland: there, downloading copyrighted material isn't illegal (and companies aren't allowed to track the IPs of people downloading files via torrent), but sharing is. In the context of a leak of confidential trade secrets, though, that's likely to be a completely different situation.
Torrenting is not the most private way of downloading things, because when you download or seed you are posting your IP to a tracker as a leecher or seeder. You can actually watch live which torrents (at least the most popular ones) an IP is downloading[1]; that site only tracks the most popular hashes, but it would be easy for some entity to track this Intel hash specifically.
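To make that concrete, here is a rough sketch of how anyone can ask a tracker about a specific info-hash (the tracker URL and hash bytes below are placeholders); a full "announce" request would additionally return actual peer IP:port pairs:

  # Rough sketch: querying a BitTorrent tracker's "scrape" endpoint for one
  # info-hash. Tracker URL and hash are placeholders. The response is bencoded
  # and carries seeder/leecher counts; an "announce" returns actual peer IPs.
  from urllib.parse import quote
  import urllib.request

  TRACKER_SCRAPE = "http://tracker.example.org/scrape"  # hypothetical tracker
  INFO_HASH = bytes.fromhex("00" * 20)                  # placeholder 20-byte hash

  url = TRACKER_SCRAPE + "?info_hash=" + quote(INFO_HASH)
  with urllib.request.urlopen(url, timeout=10) as resp:
      print(resp.read())  # bencoded: {'files': {<hash>: {'complete': ..., 'incomplete': ...}}}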
I think johnklos meant it only works in Chrome and not in Firefox or Safari. You can download the whole archive as zip with Chrome - no problem. Firefox, on the other hand, doesn't allow you to store that much data locally in the browser, so it doesn't work out of the box. You can download the two top level directories separately though and this works even in Firefox.
(At least the last time I had to download from MEGA, I RE'd what it does and it was somewhat clever - AES128 in counter mode, key is in the hash part of the URL.)
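For the curious, the decryption described there boils down to something like the following sketch, assuming the third-party pycryptodome library, a 16-byte AES key and an 8-byte CTR nonce recovered from the base64url fragment of the link. The exact way MEGA packs key, nonce and MAC into that fragment is not reproduced here, and the fragment value is a placeholder:

  # Sketch: AES-128-CTR decryption with key material taken from a URL fragment,
  # as the parent describes for MEGA links. Assumes pycryptodome is installed.
  import base64
  from Crypto.Cipher import AES  # pip install pycryptodome

  def b64url_decode(s: str) -> bytes:
      return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

  key_b64 = "A" * 32                            # placeholder fragment material (24 bytes)
  material = b64url_decode(key_b64)
  key, nonce = material[:16], material[16:24]   # assumed layout: 16-byte key, 8-byte nonce

  cipher = AES.new(key, AES.MODE_CTR, nonce=nonce, initial_value=0)
  with open("encrypted.bin", "rb") as f:        # file name is illustrative
      plaintext = cipher.decrypt(f.read())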
This is more embarrassing than harmful. Having worked at companies like Intel, it's not really that damaging to leak some of this IP - the worst that happens is some open source project gets slightly better, or you get a few more bugs (not that Intel are lacking in that area). The second we see internal marketing, pricing & roadmap slides - that's when you know they're in real trouble.
I've worked at a company very much like Intel¹ and the really closely guarded secret — the one where two vetted people turn the launch key at the same time — was the microcode patching keys.
No, the worst case is if there is some dirt in there and people find it, like:
- copyright infringement
- patent infringement
- actual backdoors (the word "backdoor" does appear, but there are many ways it can show up without there being an actual spying backdoor, including engineers' poor naming sense and code used only during prototyping)
> the worst that happens is some open source project gets slightly better
If anyone reading this is working on an open source project that would benefit from what has been leaked: stay far away from such a leak. The last thing you want is your open source project being accused of copyright or patent infringement.
AFAIK the ME is required to initialize the processor so it can never be completely disabled. The best you could do is remove any code beyond necessary initialization which has mostly already been done by me_cleaner.
Quite straightforward, I used a ch341A SPI programmer.
Just make sure you take multiple copies of your original ROM image and compare the hashes of them to make sure there was no screwup.
It took me about 10 minutes to do my ThinkPad. All I lost was some enhanced integrated GPU power management and integrated thermal management, but I use a userland fan control program anyhow.
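The "dump several times, compare, only then flash" step is easy to script; a small sketch (file names are illustrative, and the dumps would come from something like `flashrom -p ch341a_spi -r dumpN.bin`):

  # Small sketch: verify that several SPI flash dumps are identical before
  # trusting any of them as a backup.
  import hashlib, sys

  dumps = ["dump1.bin", "dump2.bin", "dump3.bin"]
  digests = {}
  for path in dumps:
      with open(path, "rb") as f:
          digests[path] = hashlib.sha256(f.read()).hexdigest()

  if len(set(digests.values())) == 1:
      print("All dumps match:", next(iter(digests.values())))
  else:
      for path, d in digests.items():
          print(path, d)
      sys.exit("Dumps differ -- re-read the chip before doing anything else.")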
On some devices it's fairly easy: pop the chip out (or attach to it in-circuit with a clip), drop it in a programmer, run a tool.. run me_cleaner.. run a tool again.
On other devices you just can't read the chip or you can and me cleaner can't make any sense of it.
The entire ME can't "technically" be disabled on modern Intel silicon. It's essentially the processor that "bootstraps" the whole CPU. Without (cryptographically signed) code running on the ME, the system can never boot.
All the non-necessary bits can be disabled out of the box, however.
My understanding is that the DOD has access to machines with the ME disabled. I wonder what capability is disabled there, and how that differs from me_cleaner? Are they doing basically the same thing?
source: They have a server hosted online by the Akamai CDN that wasn't properly secured. After an internet-wide nmap scan I found my target port open and went through a list of 370 possible servers based on details that nmap provided with an NSE script.
source: I used a python script I made to probe different aspects of the server, including username defaults and insecure file/folder access.
source: The folders were just lying open if you could guess the name of one. Then when you were in the folder you could go back to root and just click into the other folders that you didn't know the name of.
deletescape: holy shit that's incredibly funny
source: Best of all, due to another misconfiguration, I could masquerade as any of their employees or make my own user.
deletescape: LOL
source: Another funny thing is that some of the zip files you may find are password protected. Most of them use the password Intel123 or a lowercase intel123
source: Security at its finest.
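For a sense of scale, the kind of probing the source describes is not sophisticated; a sketch of the idea (host list and folder names below are hypothetical):

  # Sketch of the probing described above: take candidate hosts from a scan
  # and check whether guessed paths return a directory listing without auth.
  import urllib.request

  candidate_hosts = ["203.0.113.10", "203.0.113.11"]   # placeholders (TEST-NET range)
  candidate_paths = ["/", "/files/", "/share/"]        # guessed folder names

  for host in candidate_hosts:
      for path in candidate_paths:
          url = f"http://{host}{path}"
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  body = resp.read(2048).decode(errors="replace").lower()
                  if "index of" in body or "directory listing" in body:
                      print("open listing:", url)
          except Exception:
              pass  # closed, auth required, or unreachable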
... They're claiming it came from an NDA'd source of IP that's shared with customers.
Given that it _appears_ that there are backdoors in this firmware code, we can conclude that if there are such backdoors, then they were shared with numerous customers.
That really doesn't improve the optics of the breach.
Alternately, as others have noted, it could be overloaded nomenclature and doesn't actually indicate a backdoor. Which would be an excellent reason for them to feel comfortable sharing said 'backdoors' with their customers.
A guy on reddit claiming to be ex-Intel thinks it looks like material shared with OEMs, and thus a breach of something like a motherboard manufacturer rather than of Intel itself.
Is releasing this legal? It seems like this person isn't really disguising their identity or concerned about breaking the law. In their profile they even seem to brag about leaking company's code.
Is the person publishing this liable or just their source? Because this seems to be a hobby for the person publishing it and yet they also aren't concealing their identity. They list their former employer on their website.
Misappropriating trade secrets for financial gain is a punishable offense, and this data would qualify as a trade secret, at least for as long as it's not general knowledge to everyone or it has yet to be reverse-engineered. Aside from that, much of the data in these files has standard copyright and patent concerns.
Wouldn't "financial gain" limit it to essentially their competitors? Which renders that part irrelevant, and is kind of a "no shit" to anyone actually working on processors: don't include Intel's stuff in your own.
At the very least, Intel owns the copyright on this material, so sharing it is a copyright violation in any country that is a signatory to the Berne Convention or the TRIPS Agreement, which is effectively almost the entire planet.
Then you have to add Trade Secret laws on top of that, which will have slightly narrower jurisdiction but still impact a lot of countries. There are very few places on earth where you would not be facing any legal trouble whatsoever for releasing this material.
It may take effort and money to be protected, e.g. setting up a legal entity in such a country that takes full responsibility, so that you cannot legally be forcibly unmasked as its proprietor; other local laws would also need to be checked out to explore the remaining risks.
Nothing is a guarantee and perfection doesn't exist but it's fun to explore these legal layers.
Hi msbarnett: sorry unrelated to this thread.
In an older thread you mention an acronym TFA. The thread was a discussion on sparse files and removing bytes from the front of a file.
What is TFA?
For example, in Russia you are basically guaranteed not to be prosecuted for all kinds of cybercrime as long as you don't target Russia, Russian companies, or Russian citizens.
Most western countries agree that the concept of ‘intellectual property’ is a good thing and afford protection, or else society disincentivizes innovation due to game theoretic tragedy of the commons-type reasons.
Copyright law is fairly universal, so while stealing data/info from another company may not be illegal (depending on jurisdiction), copyright protections apply almost everywhere.
I don't believe this is accurate or in any way obvious even if this is the stance the courts would ultimately take. These files were downloaded from a publicly available CDN server discovered while browsing the internet. No authorization mechanisms were bypassed, no computer systems were hacked. These files are the result of a GET request to an Akamai server that happened to be hosting the files. Despite how this will be spun in pop culture, Intel did not secure access to these files. I'm not sure how you would prosecute someone for re-sharing a file they were given, under no legal contract, when they asked for it.
You have a lot of faith in how technically versed the law and courts are on these topics - because they sure haven't kept up with the times. And even if they were willing to split hairs over these technical details:
No civilian will agree with you that just because technically you could slip through several doors that happened to be not locked and got helpful advice from a neighbor, it doesn't mean that whatever you found behind those doors was "public" just because you didn't have to pick locks. Or that the photos you took of private company documents by social engineering your way inside must clearly be unsecured and publicly distributable because "they were given to me when I asked for them".
This isn't slipping through various open doors. There were no doors. This is literally a public server on the public internet serving files publicly. Intel is grossly negligent in securing their assets if they're hosting what they consider to be confidential trade secrets on public CDN servers.
The analog would be if I posted a flyer on a telephone pole with what I considered confidential information and someone else took a picture of it. There's no way you could argue that I had a reasonable expectation that only people for whom the flyer was intended would be able to view the flyer.
If someone had deliberately bypassed computer security measures to acquire this information I'd agree. But you don't get a free pass to be negligent just because you're a big company. I suspect the EFF would support my viewpoint, as they supported weev's appeal in a much more contentious and ethically gray scenario (the acquisition of personal information from a server that was negligently "secured" and required someone to imitate the calls an iPad would make).
Then search engines must not be legal. They crawl the public internet and index what they find.
What you’re effectively saying is that the flyer is unknowable unless a Street-view car drove past and snapped a picture of it and its owner engaged in SEO to make sure it landed near the top of search results.
There is no “house” in this analogy (which you might call a corporate/private network secured or otherwise). No private network was accessed. This stuff was on the street, in the free pamphlet section of the newspaper stand.
That’s a very weak argument. If I’m walking down the street at night and somebody comes up to me and says “GET /money”, I may respond with an HTTP 200, but that doesn’t mean the person didn’t just steal from me.
It's a fictitious example. I didn't say there was no weapon, nor said it was definitely theft. The point is that submitting a GET request in a public setting does not mean no crime.
Coercion can be the difference between asking for money and theft. In the case of this Intel data, it was clearly coerced from a server - it's not like it was linked on Google; they had to specially craft URLs to coerce the data out.
For there to be theft a property owner has to lose their property and or use of said property.
There is zero theft in this discussion.
One could argue there was infringement, and that argument is very difficult to make without breaking the Internet for everyone just because someone with deep pockets failed.
Coercion involves, at its core, an act of agency performed according to the intent of someone other than the agent or actor.
Bad practice does not support coercion at all.
I wonder whether that word even applies to entities lacking agency.
Servers are automatons. They do not make value judgments or creative acts of agency of any kind.
We need these things as a basis for coercion.
There are lots of things not indexed by Google and it is dangerous to imply people are somehow wrong when data is accessed sans a Google index.
Security by obscurity does not make sense. This mess is part of why.
Again, I don't believe it's accurate or honest to call this coercion. These files were obtained from a content delivery network by visiting a url in a browser. Nothing deceptive, cunning, crafty, or coercive about it. Let me ask you, what files am I allowed to access on a public network? Must I ask owners permission before visiting their websites? Must I be able to find it with a search engine? What constitutes a file which anyone is allowed to view?
This is incorrect. Actually, yes, the files were in fact browseable and Akamai servers typically front with DNS names that presumably resolve to their any-cast addresses where they use SNI to select content bucket, so there would have been a "friendly" name involved. Going to https://server.com/folder displayed a list of folders and files all hyper-linked and connected as is common on the internet. The fact that the server was initially discovered by way of a crawler, a scan, is irrelevant (this is actually how search engines discover content, btw). The fact that a browser could browse these files suggests that it is not "well beyond the realm of browsing public websites".
Exploration of public areas isn't illegal. There's no law saying that viewing a website through a browser is legal and any other means is not. Techies legitimately access websites in all kinds of programmatic ways. Intel made their data publicly available. Whether or not that was accidental doesn't change it.
There are often unadvertised ways onto beaches and into other spaces that are otherwise OK to use.
Sure looks to me like a potential landmine for people. Bad practice with big pockets should still just be bad practice with the same consequences for all who don't bother with better practices.
No it wouldn't. It would be like taking a picture of a dropped wallet on the street.
Regardless, under what moral framework are we operating such that obvious guilt is ascribed to anyone who might pick up a wallet in the street, anyway? Of what crime are they guilty?
Not quite, it was vacated on the grounds of improper venue. It wasn't reversed or similar; to be vacated is to be voided, as though the case never occurred.
I mean yes. I wish it was actually reversed on grounds that the ruling didn't stand. But that was the intention of the appeal. Dismissing it on improper venue is simply tactical. This is the legal system's way of saying, "there was enough contention in this case that we don't feel comfortable with the whole thing in the first place so we'll throw it out on a technicality and avoid inventing any case law here".
I'm not a lawyer, but I'm not 100% sure that's the best interpretation. It being thrown out on a technicality doesn't necessarily imply anything about their feelings regarding the facts of the case.
Basically, I would not be surprised that if the exact same case happened today, the defendants would still get jail time.
Stealing it is probably illegal, and there’s a copyright and export regulations argument to be made around copying it.
However, my understanding of the law is that, once secrets are made public, further distributing the secrets is not illegal.
So, republishing it is probably not more illegal than running a torrent of a Hollywood movie and an Ubuntu ISO (which can run afoul of export regulations).
Note: I’m not a lawyer, and if what I said was true in practice, Julian Assange / Wikileaks would have nothing to fear from the law.
Also, there are currently hundreds of seeders (according to popular torrent indexers) and there were probably thousands of snatches. Good luck prosecuting that many people.
In addition to what other people already mentioned (how it is illegal), it may depend on jurisdiction. Also whether the country of origin of the source has an extradition treaty with the USA, or if the USA can otherwise (e.g. extrajudicial kidnappings) get the culprit to stand trial in the USA.
EDIT: While it may be a relatively clear-cut case under US law, other countries (may) have different laws. There are also all kinds of potential diplomatic and political obstacles when this was done by somebody outside the USA. For instance, good luck if this was a Russian or Chinese citizen.
May be a good time for Intel to open-source FSP anyway. They've been dilly-dallying around it for a while now. There were some phoronix articles about it a year ago.
It's most likely a callback from OS to firmware, or at least this is what I can guess based on the single comment present in the screenshot and what I saw in the past in the APEI tables of Intel-based servers.
APEI tables are a part of ACPI that tell the OS how to write an error record persistently in the machine log, inject a memory error for debugging purposes, and stuff like that that's tied to the RAS (Reliability/Availability/Serviceability) features of a server. The tables contain a list of instructions like "write a value to memory" or "write a value to an I/O port"; the way they work in practice is that, by following these instructions, the OS causes the processor to enter system management mode (that's the "backdoor" into the firmware) where the firmware services the APEI request.
Since the tweet mentions SMM and RAS in the two lines it shows, my guess is that it's related to that functionality.
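A toy model of what "following these instructions" means, under the assumption of just two action types; the register layout and values below are invented for illustration, and real APEI (ERST/EINJ) tables carry many more action and instruction types:

  # Toy model of an OS walking an APEI-style instruction list: each entry says
  # "write this value to this register", where the register is either memory-
  # mapped or an I/O port. Entries and values here are invented.
  from dataclasses import dataclass

  @dataclass
  class SerializationEntry:
      space: str      # "io" or "memory"
      address: int
      value: int

  def execute(entries, write_io, write_mem):
      for e in entries:
          if e.space == "io":
              write_io(e.address, e.value)    # e.g. the port 0xB2 write that traps into SMM
          elif e.space == "memory":
              write_mem(e.address, e.value)

  # The last entry is the "ring the doorbell" write that enters SMM.
  table = [
      SerializationEntry("memory", 0xDEAD0000, 0x1),   # invented scratch write
      SerializationEntry("io", 0xB2, 0x9A),            # invented SMI command value
  ]
  execute(table,
          write_io=lambda a, v: print(f"outb(0x{a:X}, 0x{v:X})"),
          write_mem=lambda a, v: print(f"mem[0x{a:X}] = 0x{v:X}"))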
This kid has been posting these for fame (it's the same guy that posted the Daimler leak). I guess it's all fun and games until he finds himself in prison
That is indeed often the case with young narcissists (I don't know if it applies to this person, don't know him/her).
That said, I remember the shocking arrogance and total disregard (for anything but their own ego) of a few young privileged "hackers", who were involved in DDOS services for hire, and also for some very nasty IoT bot net (if I recall correctly).
Krebs wrote about them quite a bit. I think they even got caught because of that, but not sure. They did loads of real damage, that much is certain. But instead of going to jail, they got community service. Apparently with intervention of the US government, for which they now work. Go figure.
Not per se, indeed. But if that urge for validation is for something that's fundamentally wrong and/or only supports a person's failure to critically assess their own actions, then it usually is narcissism.
I'm not sure you can even consider it "breaking in", it's more like tweeting that under intel.com/super-secret url you can see some internal, secret documents.
The definition of "unauthorized access" is intentionally very broad, and ultimately depends on the kind of lawyer you can afford. Publicly taking a piss in the face of Intel and Daimler in exchange for a little lame publicity seems an incredibly dumb tradeoff
With stuff like this being exfiltrated (let's admit it: if hackers got this, they probably could also have a whole ton of fab secrets), it won't be long until America's IP is all in the hands of China/Russia/Europe.
We will have confirmation when China launches a ‘Xi Lake’ x86-compatible CPU...
Any IP lawyers in the house willing to speculate on how this is going to go down? Intel surely isn't going to let this stand, and the (Swiss) leaker is being completely open about their identity. What's the legal action going to look like?
Bigger market cap but something like a tenth of the revenue - strange world. (As Intel gets hammered in the media, their revenue remains in a different league to AMD's - which I suspect is partly because AMD can walk the walk after dropping their trousers, but there is no foreplay [i.e. sales and software].)
The three "biggest deals" here are all... a lot less important than they look. Clarifying info on all three:
"Did Intel get hacked?"
I can't confirm the exact mechanism by which these files got out, but I do know that these files are things which already get shared externally with Intel's customers under NDA. If security in general is lax, that's one thing, and future hacks of more sensitive stuff could be expected. If security in general is fine, but some NDA customer-sharing channel is lax, don't expect to see anything juicier.
"Intel123 is an awful password."
Yes it is, but... it's not for security. Intel123 is the password used to bypass executable/script filtering systems that overzealous IT put in place to "protect" employees. Employee A wants to share a zip with employee B. There are many channels they can use to do this, because the contents of the zip are not encrypted or restricted. None of these channels require encryption, but either A or B doesn't like/understand them, so they agree on email. Whoops, the filter says that executable could be harmful and out it goes. Zip-via-email doesn't work. Unless... well, if they put a password on it, the filter doesn't catch it. Good. Problem solved. This is so common that the convention Intel123 arose and solidified for exactly this purpose.
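(If you ever run into one of these zips, Python's standard library can handle the legacy ZipCrypto protection directly; AES-encrypted zips need a third-party tool. A sketch trying the two conventional passwords, with an illustrative file name:)

  # Sketch: trying the two conventional passwords against a protected zip.
  # stdlib zipfile decrypts legacy ZipCrypto; WinZip-AES archives will raise
  # NotImplementedError and need a third-party library instead.
  import zipfile

  def try_open(path: str):
      with zipfile.ZipFile(path) as zf:
          for pwd in (b"Intel123", b"intel123"):
              try:
                  zf.extractall("out", pwd=pwd)
                  return pwd
              except RuntimeError:   # bad password
                  continue
      return None

  print(try_open("example_protected.zip"))  # file name is illustrative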
"I see the word 'backdoor' in there!"
Sure. Bad name choice. That's not the kind of backdoor you're thinking of. There are a lot of things in the firmware that take this exact same form and don't use the word backdoor. It's a signal the low-level firmware is keeping an eye out for, and if received, it will trigger some other piece of firmware to do some task in SMM. If that other piece of code takes input parameters and fails to verify them, then you may have a vulnerability on your hands - in fact, this was a very common kind of vulnerability in the past. Intel has fixed a lot of these over the years. Odds are they're mostly gone by now. If the input parameters are verified (or none are taken), the worst you could do is maybe a DoS by spamming that signal to keep the CPU clogged/stuck in SMM.
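To make "verify input parameters" concrete: the core of such a check is usually just range validation - the OS-supplied buffer must lie entirely outside SMRAM and within the window the handler expects. Sketched in Python purely for illustration (real handlers do this in C inside the firmware; the addresses below are invented):

  # Illustration of the check an SMI handler must do before using an
  # OS-supplied buffer: reject anything overlapping SMRAM or outside the
  # expected communication window. Addresses/sizes are invented.
  SMRAM_BASE, SMRAM_SIZE = 0x7F00_0000, 0x0080_0000            # invented layout
  COMM_WINDOW_BASE, COMM_WINDOW_SIZE = 0x6000_0000, 0x0010_0000

  def buffer_is_safe(addr: int, size: int) -> bool:
      end = addr + size
      if size == 0 or end < addr:                               # empty or overflowed
          return False
      overlaps_smram = addr < SMRAM_BASE + SMRAM_SIZE and end > SMRAM_BASE
      inside_window = COMM_WINDOW_BASE <= addr and end <= COMM_WINDOW_BASE + COMM_WINDOW_SIZE
      return not overlaps_smram and inside_window

  assert not buffer_is_safe(SMRAM_BASE + 0x10, 0x100)   # pointer into SMRAM: rejected
  assert buffer_is_safe(COMM_WINDOW_BASE, 0x1000)       # well-formed request: accepted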
In what ways can an end user of an Intel processor expect to benefit from this? I'm guessing none, since every consumer interface is already a standard... Can anybody chime in?
While it doesn't mean it will happen, depending on what is leaked now and in the future, possibilities include:
1. Verify that debug features that are remotely exploitable are actually disabled in consumer releases of their hardware.
2. Re-implement proprietary parts of the boot sequence, such as activating memory controllers, in an open and public manner that can be more easily looked over for flaws, security and otherwise.
3. Modify parameters and tweak hardware for additional stability or performance enhancements, especially undocumented or disabled (on lower-graded chips of the same architecture) aspects of the hardware that may be present.
On the other hand, barriers include legal issues depending on what country the people working on these come from, ethical issues, and even being barred from industry work - and this is not exhaustive. Consumers, especially those in countries unconcerned about the legal aspects, will likely gain the most advantages, if any are present.
> but some of that code might help Coreboot development
Unlikely.
Most projects won't come anywhere near this sort of thing. There may be a possibility of doing clean room implementation, but writing the spec based on stolen IP is the problematic step.
Then again, there is a high chance that none of this will be useful.
It will be more or less impossible to prove or disprove that anyone obtained some crucial information from there. The info will always somehow make its way into the places it's needed eventually.
It doesn’t matter if it’s provable or not, most developers won’t risk it especially if they want to keep their jobs or be hireable.
If you review the content and publish say a blog post, even without legal repercussions it can impact your ability to be hired in the future since everything you do from that point can be tainted.
So if you do look, you should keep it quiet, or publish it under a pen name that you can't ever take credit for.
My point was that even without anyone taking that risk, the information will spread.
Someone reads the code, mentions it to a friend, who adds it to a blog post, which gets cited in a wiki, which gets read by a developer unaware of the source. If the information is useful, it will end up getting spread.
Take the Microsoft Windows code that got leaked - was anyone blacklisted for that?
Also, I would assume processor companies hire people from other processor companies and everyone wants the best people, so most of the basic knowledge would have already made its way to AMD and other companies.
But that isn't basic knowledge. If you work in firmware development, embedded, SoC design, etc., and your employer or future employers might be competing against Intel in some market segment (which, given the sheer number of products Intel has, isn't an unlikely scenario), I would be very careful about admitting to reading, not to mention publishing, content based on this leak.
If you work in a completely unrelated field then you don’t need to care as much.
I’ve only seen people writing “backdoor!” without actually saying what kind, for whom, into what, and so on. Seems pretty disingenuous to me. Could easily be something trivial.
The FSP source code is supposedly leaked as part of this, which is used to initialise the memory controller. Are we closer to (modern) blob-free Intel platforms?
CloseR, yes. Close? No. For one, memory init code differs from product gen to product gen and pulls in platform/board specific libraries and inputs to set up some parameters. The bigger problem though is just how big and messy the memory init code is. It would take a substantial number of people a substantial amount of time to unwind and understand what's going on, let alone do a sane and/or clean-room implementation of it all.
Personally it seems more like complacency and cultural rot has caught up to them than any bad actor - excluding their own management chasing ego gratification or short term profits. Falling behind AMD in so many metrics when they were previously often a second-best rival screams that they need to get their shit together.
In my experience password protected files are often password protected for obscure reasons which have nothing to do with the intent of keeping them secret, like:
- prevent anti virus from messing with it
- comply with some obscure regulation w.r.t. contacts or the law, where it is enough if you can argue the data was encrypted.
If you make zip files in a company, there is never a repository of passwords, because that would be insecure, ergo zip files with other passwords usually are not easy to unzip after 5-10 years when the owner is dead/gone and the passwords are lost.
This type of password is used in almost all big corporations. People are asked to encrypt things, but without password managers or key-management tools.
> People are asked to encrypt things, but without password managers or key-management tools.
That doesn't really make me think better of the company; if the company fails to support secure workflows, it's still on them when people fail to use secure workflows.
It’s mentioned in the Twitter thread that at least some of the files have a password of “I accept” instead. That leads me to believe that the primary purpose might just be to indicate agreement to an NDA.
The number one (by a wide margin) reason for Intel123 is that somebody is trying to email a zip to somebody else, but a mandatory filter notices "bad files" (oh no, executables!) inside the zip and removes it to keep people "safe". So the zip gets a common, known password, and the recipient gets their files in peace. It's not a security measure at all. It's a workaround for braindead IT "solutions" hindering day-to-day operations. The files could be shared via any number of non-encrypted channels just fine, but the particular employees trying to share happen to be most familiar with email, and the filter doesn't know or care if there are secrets - there are EXECUTABLES! Those are dangerous, don't you know?
For the life of me, I can't understand why people insist on making passwords with the name of the company in them. It's so absolutely stupid, but common.
I'd love an IPR lawyer to explain the legal paths to a clean-room spec of the bits of this which could be useful, like the ME or the parts coreboot depends on.
I see comments which say "stay clear, they will", but I would like to know how, if at all, this could be done legally on the receiving side of the functional spec from a clean room.
If you're at a company that can be considered an Intel competitor, I would avoid this like the plague. Weren't there problems for people working on Linux after merely viewing source code from other operating systems?
Unless they've actually found a real smoking gun, probably not even close. Besides, even if there isn't a backdoor in intel CPUs they've definitely tried.
Why would they include stuff from proprietary releases?
I understand exposing backdoors and all, but who cares about camera firmware for an airgapped system?
I wonder if some of the clients for those devices are involved, and the goal is that those clients got fed up with the NDAs and wanted all this in the "public domain"?