As Matt Levine likes to say - everything is securities fraud.
> contributing to global warming is securities fraud, and sexual harassment by executives is securities fraud, and customer data breaches are securities fraud, and mistreating killer whales is securities fraud, and whatever else you’ve got.
Nah, every bad thing a company does is securities fraud. With regard to this topic, though, this quote is more relevant:
> Maddeningly, people continue to think that it is somehow illegal insider trading? “You bought a thing without telling anyone, and then you told people that you liked the thing and it went up, that’s illegal.” I don’t even understand that intuition. No! You can trade when you know your own intentions, even when nobody else does! Also, man, it’s Bitcoin, there’s no such thing as insider trading.
A good number of comments here mention dealing with the GPU is going to be a major hurdle. What makes porting GPU drivers significantly more challenging than everything else?
They're very complex, very stateful devices which also run their own compiled shader code. Not to mention auxiliary DSPs like video decoders (not sure if the M1 has that as part of the GPU or as a separate block), power gating control, and much more.
They may have on the order of 100 registers to talk to, and they're horribly proprietary, with pretty much no standardisation.
Reverse engineering that is hellish at best - you can see it in projects like nouveau, which barely manages to get nVidia cores up and working without help from the manufacturer. And that's after years and years of development.
The hurdle Nouveau is facing is that some things, like reclocking, need firmware loaded onto the card. The firmware is not in non-volatile memory on the hardware but is a file shipped with the drivers; the one shipped with the proprietary drivers is not redistributable, and if you wanted to make your own, it would need to be signed by Nvidia anyway.
That's pretty much game over for Nouveau, and it is not due to difficulties in figuring out the registers and the NV ISA.
> What makes porting GPU drivers significantly more challenging than everything else?
Multiple reasons:
1) GPU manufacturers are notorious for not publishing documentation out of IP/patent concerns. The worst offender here is NVIDIA.
2) For embedded GPUs there isn't much interest in open source drivers... the big customers (think Samsung and the like) have direct support from the chip design vendor and get drop-in drivers as part of the board support package (BSP, basically a conglomerate of bootloader, kernel+modules+initrd, and firmware blobs for components such as wifi), so they don't need OSS drivers.
3) The mobile GPU space is... splintered. With desktops you have the three major players AMD/ATI, NVIDIA, and Intel's built-in Iris; in the mobile GPU space there are more.
> GPU manufacturers are notorious for not publishing documentation out of IP/patent concerns. Worst offender is NVIDIA here.
I think Apple easily takes the cake from nVidia - they don't even provide drivers for anything but their own platforms (that is, for their proprietary GPU core - the one that's actually in the M1).
I don't understand how a lot of this comment applies to the Apple M1. I'm not saying it doesn't; I'm completely ignorant of these things. Am I just missing it?
Apple's M1 chip has a custom GPU built into it. There is no documentation on how that GPU works and Apple hasn't released any.
Making any modern GPU work is a lot of work because of how complicated they are. That's even with the full documentation.
In the Apple M1 case, the GPU will have to be reverse engineered to understand how it works, then a driver will need to be written for Linux that supports it.
In 11 years of using it, I've never seen AWS increase the price for a service whose cost inputs they controlled entirely. It can happen, though. They increased AWS Pinpoint costs because their cost to send SMSes in India increased by 25% due to a regulatory action that increased the price for all SMS traffic in India.
Often, AWS's prices are _really_ high for new services. I think this is useful for two reasons: it ensures your early adopters are those who get the most value, and it gives you a big buffer when discovering how much it actually costs to operate. This usually guarantees that prices have only one way to go. For example, AWS IoT Device Management had a 90% price cut after it was introduced.
> Often, AWS's prices are _really_ high for new services. I think this is useful for two reasons: it ensures your early adopters are those who get the most value, and it gives you a big buffer when discovering how much it actually costs to operate.
I've thought about this concept a few times lately.
Like, I get the impression that a savvy business, when starting a new product with very little competition, should start with prices as high and guarantees as weak as it can get away with, because it can always improve the offer later, whereas going the other way runs into loss aversion and consumer alienation.
E.g. it's better to start at $40/month and lower it to $30/month after a year than to start at $20/month and get a lot of angry users when you realize you need $30 to break even.
No, but they have stopped reducing the prices of existing services once they become deprecated, so those services effectively become more expensive compared to newer ones, even though the prices were never actually increased.
Although prices for reselling other products, such as RDS Oracle or MSSQL, could potentially have increased if the licence fees went up.
But I don't think that is what you are referring to.
> This is only likely to make it slightly harder to track the small fraction of users already taking strong measures to prevent being tracked.
This statement isn't correct. That small fraction of users would've already turned off IDFA tracking. With this move Apple is merely prompting the user before turning the tracking on. I'd say FB is worried about the larger fraction of users who aren't privacy-conscious.
Compute + DB should cover a lot of use cases, but it'll be great to have support for more AWS services. DNS (Route53) issues are a pain to deal with, for example.
A Route 53 propagation issue caused a bunch of our web servers to be unable to connect to the read replicas of our RDS cluster. The cluster scaled in a read replica, but the DNS change took hours to propagate, so the web servers kept trying to connect to a replica that no longer existed.
Our workaround was to put the IP address in the hosts file - not an ideal setup, but it got the job done.
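To be concrete, the workaround looked roughly like this (the endpoint and IP below are made up, not our actual values): pin the reader endpoint to a replica you know is healthy until DNS catches up.

```
# /etc/hosts - hypothetical entry pinning the RDS reader endpoint
# to the IP of a replica we knew was alive (values are placeholders)
10.0.12.34  mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com
```

The obvious downside is that you have to remember to remove the entry once DNS is sane again, or you'll be silently ignoring future topology changes.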
What do multiple monitors have to do with having a dev env on EC2?
But on a similar note, I use VS Code's Remote SSH plugin to connect to an EC2 on one monitor and terminal + browser on the other. It's not my primary dev env, but it's great for experiments and side projects.
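For anyone curious, the setup is roughly this (the host alias, hostname, and key path below are placeholders): the Remote - SSH extension reads your ~/.ssh/config, so a single entry like this is enough for the instance to show up as a connection target.

```
# ~/.ssh/config - hypothetical entry; VS Code's Remote - SSH extension
# picks this up, so "dev-box" appears in its remote targets list
Host dev-box
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/dev-box.pem
```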
If I understand your comment correctly - even though the fingerprints are published, the attacker can still reverse engineer the implementation from the tools and bypass antivirus systems, at least in the near future?
Also, fingerprints will only stop the lowest level of attackers. You can easily change a binary in a way that changes the fingerprint while the functionality remains the same: reorder functions, add some garbage data, etc.
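To illustrate the "add some garbage data" case (a toy sketch, not anything specific to the FireEye countermeasures): if a fingerprint is just a file hash, appending a few junk bytes changes it completely while leaving the original code untouched, and loaders typically ignore trailing data anyway.

```python
import hashlib

# Toy example: a hash-based "fingerprint" over a stand-in binary blob.
original = b"\x7fELF" + b"\x00" * 64        # placeholder for a real executable
padded = original + b"JUNK1234"             # append garbage after the code

print(hashlib.sha256(original).hexdigest())  # the published fingerprint
print(hashlib.sha256(padded).hexdigest())    # completely different hash
```

Signatures that match on code patterns or behaviour take more effort to evade than a plain hash, but the bar is still not very high for a capable attacker.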
The biggest advantage is that it allows orgs to scan for the fingerprinted tools within their own environments and see if they might have been attacked as well.
Some of the fingerprints are easily gotten around by fudging the binaries a bit. Others, like Snort rules, look at things like network traffic, which might not always be so easy to disguise.
A nation-state actor likely already knows most (if not all) of the techniques being used by FireEye. If they were really a nation-state actor, then they were likely after insight into sensitive networks rather than the tools, imo.
My guess is that this particular outage was caused by a scaling constraint somewhere in the stack.