I believe the latest rumors suggest 16GB of unified GDDR6, but the relevant tech here might be sophisticated caching technology that also leverages a fast PCIe 4.0 SSD.
PCIe 4.0 is on the order of gigabytes per second. Hundreds of billions of triangles would demand hundreds of gigabytes. There is no way they can calculate this stuff in real time (and to be fair, they never claimed that they did).
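A quick back-of-envelope sketch in Python, using assumed figures for usable PCIe 4.0 throughput and per-triangle storage (neither number comes from the demo or any official source), shows the scale of the mismatch:

```python
# Assumed figures: PCIe 4.0 x4 NVMe peaks around 7-8 GB/s raw;
# call it ~5 GB/s usable. Assume aggressively compressed geometry
# at ~2 bytes per triangle (a guess; real formats will differ).
usable_bandwidth = 5e9        # bytes/s, assumed
bytes_per_triangle = 2        # assumed
triangles = 100e9             # "hundreds of billions" -> take 1e11 as a floor

total_bytes = triangles * bytes_per_triangle
print(f"Geometry size: {total_bytes / 1e9:.0f} GB")                       # ~200 GB
print(f"Time to stream it once: {total_bytes / usable_bandwidth:.0f} s")  # ~40 s
```

Even with generous compression you land in the hundreds-of-gigabytes range, which is tens of seconds of streaming, not per-frame work.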
So presumably the engine now does that kind of stuff at load time. The bottleneck is then the space on the SSD and the pace of scene changes. This fits quite nicely with the repetitive statues, but it leaves a question mark over that final flight sequence.
I don't know. The police in the city of Basel bought seven Teslas. It seems this wasn't OK from a data-protection standpoint. And I don't know whether it's OK now, or whether the police were simply allowed to keep the cars because they had already bought them. According to an article written in German, the police had to replace the SIM cards inside the cars with Swiss SIM cards [1]. IMHO this looks (partly) like a token gesture, because there is still an internet connection. (However, according to the article the cars are now considered to be OK.)
I wouldn't buy a Tesla, and I wouldn't drive one if it were given to me for free. Despite legally holding the title, you can't really own a Tesla: they control the software "activation" and "license" that it won't operate without. And even if you don't use the Autopilot sensors, Tesla uses them to collect data for itself.
Waymo would have all the same problems, except I doubt Waymo is targeting cars people would buy at all. Google seems set on delivering a transportation service. It's questionable if they'll ever sell their system at all.
Tesla is outright hostile in this regard, and several others. Regardless of whether they make a good car, you have to accept a lot of negative things to purchase one.
This was fixed in software a while ago; even my old 2011 (?) Mac mini no longer needed the dummy HDMI dongle connected to have a responsive/HW-accelerated UI via remote access.
I'm not sure I agree; a set of $700 wheels won't make anybody who can easily afford them feel like a struggling customer again.
Wealthy people I've gotten to know would likely feel offended by the price tag relative to the functionality.
The nouveau riche crowd looking to display status on the other hand...
All current Macs include a T2 chip, which is a variant of the A10 that handles things like controlling the SSD NAND, Touch ID, the webcam DSP, various security tasks, and more.
The scenario you mention (an upgraded "T3" chip based on a newer architecture, acting as a coprocessor to execute ARM code natively on x86 machines) seems possible, but I don't know how likely it is.
Yeah, but what would be the rationale? They want to avoid x86 as the main CPU, so either you'd get an "x86 coprocessor to run Photoshop" (let's go with the PS example here).
Or you'd have to have fat binaries for x86/ARM execution, assuming the T3 chip got the chance to run programs. Then each program would have to be pinned to an x86 or ARM core at start (maybe some applications could set a preference, like having PS always pinned to x86 cores), or you'd need the magical ability to migrate threads/processes from one arch to another, on the fly, while keeping the state consistent... I don't think such a thing has ever even been dreamed of.
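For context on the fat-binary part: a universal binary just packages one slice per architecture and the loader picks the matching one. Here's a hedged Python sketch that lists the slices in a Mach-O fat file; the path and the x86_64/arm64 pairing are purely illustrative:

```python
import struct

# Hedged sketch: list architecture slices inside a Mach-O "fat"/universal binary,
# the kind of packaging a mixed x86/ARM machine would rely on.
FAT_MAGIC = 0xCAFEBABE                                   # 32-bit fat header magic
CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}  # Mach-O cputype constants

def fat_architectures(path):
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            return []                                    # thin, single-arch binary
        archs = []
        for _ in range(nfat_arch):
            cputype, _, _, _, _ = struct.unpack(">iiIII", f.read(20))
            archs.append(CPU_NAMES.get(cputype, hex(cputype)))
        return archs

print(fat_architectures("/usr/bin/true"))  # e.g. ['x86_64', 'arm64'] for a universal build
```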
I don't think there's any chance of ARM/x86 coexisting as "main CPUs" in the same computer without it being extremely expensive, and it would even defeat the purpose of having a custom-made CPU to begin with.
An x86 coprocessor is not that outlandish. Sun offered this with some of their SPARC workstations multiple decades ago, IIRC.
Doing so definitely would be counterproductive for Apple in the short-term, but at the same time might be a reasonable long-term play to get people exposed to and programming against the ARM processor while still being able to use the x86 processor for tasks that haven't yet been ported. Eventually the x86 processor would get sunsetted (or perhaps relegated to an add-on card or somesuch).
Whether it's for performance, battery life, or cost reasons, it wouldn't really make sense:
a) performance-wise, the move would be driven by having a better-performing A-series chip
b) if they aimed at a 15W part, battery life would suffer; 6W parts don't deliver good performance
c) for cost, they'd have to buy the Intel processor, plus the infrastructure to support it (socket, chipset, heatsink, etc.)
Especially for (c), I don't think Intel would accept selling chips as coprocessors (it'd be like admitting their processors aren't good enough to be main processors), nor would Apple put itself in the position of adjusting the internals of their computers just to accommodate something they are trying to get away from.
Apple probably doesn't need the integrated GPU, so an AMD-based coprocessor could trim that off for additional power savings (making room in the power budget to re-add hyperthreading or additional cores and/or to bump up the base or burst clock speeds).
> for cost, they'd have to buy the intel processor
Or AMD.
> and the infrastructure to support it (socket, chipset, heatsink, etc)
Laptops (at least the ones as thin as Macbooks) haven't used discrete "sockets"... ever, I'm pretty sure. The vast majority of the time the CPU is soldered directly to the motherboard, and indeed that seems to be the case for the above-linked APU. The heatsink is something that's already needed anyway, and these APUs don't typically need much of it. The chipset's definitely a valid point, but a lot of it can be shaved off by virtue of it being a coprocessor.
I do not use Linux on my desktop or notebook but I would like to do so in the near future. This means I should strive to keep all my user created data within the home directory at all times?
How about program preferences and configs that get saved elsewhere by default?
I imagine having ZFS snapshots of / would be useful for updates going forward.
I've gone through three laptops now with a migrated home. That included an Ubuntu -> Arch -> Fedora migration too. It works pretty well (with more issues between distros than between version upgrades).
> This means I should strive to keep all my user created data within the home directory at all times?
Why would the user have privileges to save it outside of the home directory? :) Special cases like databases with storage in /var need to be handled separately.
> How about program preferences and configs that get saved elsewhere by default?
Nothing goes (or should go) elsewhere by default.
If you're wiping the distro and re-installing - you probably don't want to keep /etc anyway.
FWIW, Ubuntu's in-place LTS-to-LTS release upgrade should be solid. But sometimes you want to start fresh with new, contemporary defaults, or a different disk layout.
BTW, with ZFS you can have a separate home (or home/your_user) filesystem and not have to worry about allocating fixed space (and thus running out of free space on / while more is available in /home, or vice versa).
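A hedged sketch of what that can look like, driving the zfs CLI from Python; the pool name "rpool", the dataset names, and the quota are all illustrative, not a recipe:

```python
import subprocess

def zfs(*args):
    """Run a zfs subcommand and fail loudly if it errors."""
    subprocess.run(["zfs", *args], check=True)

# Illustrative layout: one parent dataset for homes, one dataset per user.
# Both draw from the same pool, so there is no fixed split between / and /home.
zfs("create", "-o", "mountpoint=/home", "rpool/home")
zfs("create", "rpool/home/your_user")
# Optional: cap a user's space without pre-allocating it.
zfs("set", "quota=400G", "rpool/home/your_user")
```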
Yes, there's also the topic of unrecoverable read errors (URE) and their effect on successful RAID rebuilds. [1][2]
Most consumer drives are still rated at less than one unrecoverable read error per 10^14 bits read. That's one error per 12.5 terabytes, so in the worst case you could end up in situations where, on average, you are unable to read a full 16TB drive without an error. Needless to say, this is less than optimal for rebuilding a failed RAID array.
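Taking that spec-sheet figure at face value (it is a worst-case rating, and real drives usually do better, as noted below), a rough probability sketch:

```python
import math

# Assumed spec-sheet figures: 1 unrecoverable read error (URE) per 1e14 bits,
# and a 16 TB drive read end to end, e.g. during a RAID rebuild.
error_rate_per_bit = 1e-14
drive_bits = 16e12 * 8                                 # 16 TB in bits

expected_ures = error_rate_per_bit * drive_bits        # ~1.28 expected errors
p_clean_read = math.exp(-expected_ures)                # Poisson approximation
print(f"Expected UREs per full read: {expected_ures:.2f}")
print(f"P(no URE during a full read): {p_clean_read:.2f}")   # ~0.28
```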
Anecdotal evidence (i.e. very low error rates during ZFS scrubs) suggests that manufacturers underrate their drives and they are much more reliable than that, but it is something to keep in mind.
Fortunately, drive capacity has completely outpaced my needs for personal data storage in recent years, so I am happy with JBOD or RAID1, with backups of course.
> Anecdotal evidence (i.e. very low error rates during ZFS scrubs) suggests that manufacturers underrate their drives and they are much more reliable than that, but it is something to keep in mind.
I've seen plenty of drives returning invalid data with correct CRC over the years. On reliable server-grade Xeon + ECC hardware. Of course the vast majority of drives never do it, there's just no way to know which ones do until it happens.
Firmware bugs in weird corner cases? Cosmic rays? Perhaps, but I think it's more reasonable to just consider it one of those weird things that occasionally Just Happen (TM) and need to be protected against at a higher level.
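A minimal sketch of what "protecting at a higher level" can mean in application code: store a checksum next to the data and verify it on every read, so bad data returned with a valid drive-level CRC at least gets detected (checksumming filesystems like ZFS do this per block, which is the nicer solution). The sidecar-file naming here is made up:

```python
import hashlib

def write_with_checksum(path, data):
    # Write the payload and a sidecar SHA-256 of its contents.
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_verified(path):
    # Read the payload back and refuse to return silently corrupted data.
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"checksum mismatch for {path}: possible silent corruption")
    return data
```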
All drives produced in the last 30 years or so run ever more complicated software stacks. For example, they all have features that move at-risk data to safer locations without the host system requesting it or even knowing about it. Their physical (bits on spinning rust or NAND flash blocks) and logical (what the host sees) data representations can be completely different.
Plain old CRC errors, though, are way more frequent. I feel much more comfortable about those errors: at least the drive knows the data is corrupted.
> Intel is selling it as a way to keep secrets safe inside the processor against attackers with root/hypervisor software access or even physical access. Of course, a bevy of attacks in the recent months have demonstrated that this isn’t really achievable given the extremely large attack surface.
As a layman I have to wonder, should we expect similar attacks on Apple's Secure Enclave in the future?
It greatly helps Apple that the T2 is a separate chip specially designed to do one thing well: crypto in a secure way, even in the presence of physical attacks. How to do that has been known for quite some time. For example, modern SIM cards or satellite TV cards are very secure, and a physical attack is only feasible if one is willing to spend something like 100K+ per card.
What Intel is trying to do is allow general-purpose secure computing at minimal extra cost. This is relatively new and, as the various bugs demonstrate, may not even be achievable. I.e. it may be possible to create a provably secure chip, but its cost would make it a niche product.
Yes, MCUs with intentionally hardened flash blocks are what those firmware-recovery specialists deal with. That covers things like Gemalto SIM and credit card chips.
It looks to me like having a standalone chip is not great in general because of hardware attacks: you can easily MITM the system bus, for example. A number of attacks become much harder once you use an integrated secure element.
The form factor of the iPhone, of course, almost makes the T2-style secure enclave an integrated secure module. I also don't think hardware attacks are really considered anyway (and, as we see, most researchers focus on software attacks).
Apple's Secure Enclave is a coprocessor designed specifically to reduce attack surface, and minimize the surface area of untrusted code.
It physically separates the ephemeral secret storage (Touch/Face ID) and the hardcoded crypto keys (not even the SE firmware has access to the key material; it's just allowed to run the circuits).
Interesting question, I would love to read some insights about that too. From my really basic understanding, Apple's Secure Enclave is a coprocessor, so different rules should apply, but I'm also a poor layman when it comes to hardware design.