This is pretty neat. There have been multiple attempts at something like that in the past, and of them, this is by far the best-looking one.
However, I think some artefacts can never be properly resolved (see for example the marble statue to the left at 1:53 in the video), which makes it look weird and breaks immersion (the same goes for the flat roofs that are supposed to be slanted/angled).
Also, the U7 engine is a very complex beast, so implementing it properly will take a lot of work and fine-tuning (although I guess they can use Exult as a start, which is by now pretty feature-complete).
The default palette is essentially IBM's color-blind-safe palette [0], which does provide some 'safe' defaults. IBM's design guidelines provide some sound advice for color use too [1].
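If you want those defaults in your own plots, here's a minimal sketch (the hex values are the commonly cited IBM color-blind-safe set, quoted from memory, so double-check them against [0]):

    # Set matplotlib's default color cycle to the IBM color-blind-safe palette.
    # Hex values are quoted from memory; verify against the linked guideline.
    import matplotlib.pyplot as plt
    from cycler import cycler

    IBM_SAFE = ["#648FFF", "#785EF0", "#DC267F", "#FE6100", "#FFB000"]
    plt.rcParams["axes.prop_cycle"] = cycler(color=IBM_SAFE)

    # Quick visual check of the cycle
    for i, c in enumerate(IBM_SAFE):
        plt.plot([0, 1], [i, i + 1], label=c)
    plt.legend()
    plt.show()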
Agree completely, the title is misleading. I was amazed just watching the video, but this is actually a fan project. Which is cool, but not a prototype of the original game as the title implies, or something that should be "recovered".
Reminds me of a ROM-hack for Zelda: Ocarina of Time from a few years ago, that was presented in the release video as using unused assets and storyline from the game itself, when it was actually almost entirely new material. A great technical achievement, to be sure, but somewhat dishonest in its presentation.
No, the FTL is still in the SSD unless it's a host-managed SSD that is also operating in host-managed mode, and none of the articles mention that being related to the issue.
No, some SSDs use a host memory buffer (HMB) to cache FTL tables. If the FTL cache gets corrupted, and that causes critical data to be overwritten, that could brick the SSD. For instance, if the FTL table were corrupted in such a way that a page for a random file is mapped to the page holding the SSD's FTL (or other critical data), and the OS/user then tries to write to that file.
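To make that failure mode concrete, here's a toy sketch of the scenario (nothing like real firmware; the names and layout are invented for illustration):

    # Toy model of an HMB-cached logical-to-physical (L2P) table, mirroring the
    # scenario above. Purely illustrative: a real FTL is not a dict, and real
    # flash writes go to fresh pages rather than overwriting in place.
    FTL_METADATA_PAGE = 0              # physical page holding the drive's own tables
    l2p = {100: 7, 101: 8}             # logical block -> physical page, cached in host RAM

    flash = {0: "FTL metadata", 7: "file data", 8: "file data"}

    l2p[100] = FTL_METADATA_PAGE       # corruption of the cached entry in host memory

    # An ordinary write to logical block 100 is now steered onto the page the
    # drive needs in order to find everything else.
    flash[l2p[100]] = "user data clobbering the map"
    print(flash[FTL_METADATA_PAGE])    # the FTL's metadata is gone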
Yes, which is why they're cheap(er). Going out to system RAM is better than the alternative of using flash, but DRAM-less SSDs are still the cheap option; HMB is a mitigation, not a complete fix.
The FTL executes on the SSD controller, which (on a DRAM-less design) has limited on-chip SRAM and no DRAM. In contrast, controllers for more expensive SSDs require an external on-SSD DRAM chip of 1+ GB.
The FTL algorithm still needs one or more large tables. The driver allocates host-side memory for these tables, and the CPU on the SSD that runs the FTL has to reach out over the PCIe bus (e.g. using DMA operations) to write or read these tables.
It's an abomination that wouldn't exist in an ideal world, but in that same ideal world people wouldn't buy a crappy product because it's $5 cheaper.
One of the Japanese sites has a list of SSDs that people have observed the problem on - most of them seem to be dramless, especially if the "Phison PS5012-E12" entry is an error. (The PS5012-E12S is the dramless version.)
Then again, I think dramless SSDs represent a large fraction of the consumer SSD market, so they'd probably be well-represented no matter what causes the issue.
Finally, I'll point out that there's a lot of nonsense about DRAMless SSDs on the internet - e.g. Google shows this snippet from r/hardware: "Top answer: DRAM on the drive benefits writes, not reads. Gaming is extremely read-heavy, and reads are..."
FTL stands for flash TRANSLATION layer - it needs to translate from a logical disk address to a real location on the flash chip, and every time you write a logical block that real location changes, because you can't overwrite data in flash. (you have to wait and then erase a huge group of blocks - i.e. garbage collection)
If you put the translation table in on-SSD DRAM, it's real fast, but gets huge for a modern SSD (1+GB per TB of SSD). If you put all of it on flash - well, that's one reason thumb drives are so slow. I believe most DRAM-full consumer SSDs nowadays keep their translation tables in flash, but use a bunch of DRAM to cache as much as they can, and use the rest of their DRAM for write buffering.
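For a rough sense of where that 1+GB per TB figure comes from (assuming the common 4 KB mapping granularity and 4-byte entries, which are textbook numbers rather than any specific vendor's design):

    # Rough sizing of a flat logical-to-physical map. Assumes 4 KB mapping
    # granularity and 4-byte physical-page entries; real designs vary.
    capacity = 1 * 1024**4          # 1 TB drive
    page_size = 4 * 1024            # 4 KB mapping granularity
    entry_size = 4                  # 4 bytes per table entry

    entries = capacity // page_size         # 268,435,456 entries
    table_bytes = entries * entry_size      # 1,073,741,824 bytes
    print(table_bytes / 1024**3, "GB")      # -> 1.0 GB of table per TB of flash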
DRAMless controllers put those tables in host memory, although I'd bet they still treat it as a cache and put the full table in flash. I can't imagine them using it as a write buffer; instead I'm guessing when they DMA a block from the host, they buffer 512B or so on-chip to compute ECC, then send those chunks directly to the flash chips.
There's a lot of guesswork here - I don't have engineering-level access to SSD vendors, and it's been a decade since I've put a logic analyzer on an SSD and done any reverse-engineering; SSDs are far more complicated today. If anyone has some hard facts they can share, I'd appreciate it.
I don't buy this. There are plenty of DRAMless SATA SSDs, which should be impossible if your description were correct, not to mention DRAMless drives working just fine inside USB-NVMe enclosures.
>but gets huge for a modern SSD (1+GB per TB of SSD)
Except most drives allocate 64 MB through HMB. Do you know of any NVMe drives that steal gigabytes of RAM? AFAIK Windows limits HMB to ~200 MB?
>Finally, I'll point out that there's a lot of nonsense about DRAMless SSDs on the internet
The FTL doesn't need all that RAM. RAM on drives _is_ used for caching writes, or more specifically for reordering and grouping small writes to efficiently fill whole NAND pages, preventing fragmentation that destroys endurance and write speed.
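A toy sketch of what that grouping looks like (invented sizes and structure, just to illustrate the idea):

    # Toy write coalescing: accumulate small host writes in RAM until a full
    # NAND page's worth can be programmed in one go. Illustration only.
    NAND_PAGE = 16 * 1024            # program unit of the flash
    HOST_BLOCK = 4 * 1024            # size of a typical small host write

    buffer = []                      # would live in on-drive DRAM (or HMB)

    def program_nand_page(chunks):
        print(f"programming one {NAND_PAGE // 1024} KB page from {len(chunks)} host writes")

    def host_write(lba, data):
        buffer.append((lba, data))
        if len(buffer) * HOST_BLOCK >= NAND_PAGE:
            program_nand_page(buffer)   # one sequential program instead of four scattered ones
            buffer.clear()

    for i in range(8):
        host_write(i, b"\x00" * HOST_BLOCK)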
Are you talking about the fact that NVMe works by MMIO and DMA? So does pretty much any SATA controller, so there's no inherent difference there (it has been _many_ years since the dominant way of talking to devices was through programmed I/O ports). Unless you have an NVM device with host-backed memory (as discussed elsewhere in the thread), it's not like the CPU can just go and poke freely at the flash, just as it cannot overwrite a SATA disk's internal RAM or forcefully rotate its platters. It can talk to the controller by placing commands and data in a special shared memory area, but the controller is fundamentally its own device with separate resources.
FWIW the idea of inspecting the certificate "for typos" or similar doesn't make sense. What you're getting from the CA isn't really the certificate but the act of signing it, which they've already done. Except in some very niche situations, your certificate is already publicly available by the time you receive it; what you get back is in some sense a courtesy copy. So it's too late to "approve" this document or not, the thing worth approving already happened.
Also, the issuing CA was required by the rules to have done a whole bunch of automated checks far beyond what a human would reasonably do by hand. For example, they're going to have checked that your public key doesn't have any of a set of undesirable mathematical properties (especially for RSA keys) and doesn't match various "known bad" keys. Can you do better? With good tooling, yeah; by hand, not a chance.
But then beyond this, modern "SSL certificates" are just really boring. They're 10% boilerplate 90% random numbers. It's like tasking a child with keeping a tally of what colour cars they saw. "Another red one? Wow".
It's possible that this was just a slight imprecision of language, and the thing being inspected is the CSR rather than the actual certificate. (But the point about individual certificates/CSRs being unworthy of human attention is totally right.)
That's true, although inspecting a CSR is also daft, because much of the CSR is actually ignored by the CA: you can "check" it, but if it were "wrong" that would make absolutely no difference to anything.
The CA is going to look at the requested names (to check they were authorized) and copy the requested public key; this combination is what's certified. But if your antiquated gear spits out a CSR that also gives a (possibly bogus) company name and a (maybe invalid) street address, "checking" those won't matter, because the CA will just throw them away: the certificate they issue you isn't allowed to contain information they didn't validate, so that part of your CSR is tossed without being read.
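To illustrate, this is roughly all of the CSR a public CA actually uses, the requested names and the key, while the rest of the subject gets discarded (a sketch with Python's cryptography package, not any CA's actual pipeline; the file name is made up):

    # Sketch: the parts of a CSR a public CA actually uses versus the subject
    # fields it will simply discard. Illustrative only.
    from cryptography import x509

    csr = x509.load_pem_x509_csr(open("request.csr", "rb").read())

    names = csr.extensions.get_extension_for_class(
        x509.SubjectAlternativeName
    ).value.get_values_for_type(x509.DNSName)

    print("names the CA will validate:", names)
    print("key the CA will certify:", csr.public_key())
    print("subject the CA will ignore:", csr.subject.rfc4514_string())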
There are CAs that to this day have been caught issuing certs that don't match their CSRs, because the CA uses a manual process of hand-copying or hand-typing fields from the CSR into the new certificate.
So even reviewing CSRs won't help you.
(The solution of course is to automate your cert request/issuance, which has the side effect of ensuring no human is involved in the cert process)
There are environments where it's required that _every_ environmental change has an associated CCB ticket. In that kind of environment, yes, every new cert is attached to a ticket that the board has to review and approve.
Yes, it's insane, but it sure makes fault analysis easier when the environment is that locked down and documented.
But they don't require changes for altering the state of RAM on the server, or data in the database. So clearly not every change requires an associated ticket. Certificate renewal is part of the regular operation of a device, not a change to that operation.
Sounds very similar to something we had set up when I worked for a major retailer a few years ago. In order to get a cert you had to email the security team or some junk like that, and THEY would go through the DigiCert UI. I stopped reading the absolutely giant and incredibly confusing certificate support document and swapped everything I was responsible for to ACM.
Side note, at some point I got an email telling me to stop issuing public certificates and only issue private certs. I had to get on a call with someone and explain PKI. To someone on the security team!