This is one of the problems with Singletons. Especially if they end up interacting or being composed.
In Java you’d have the static initializers run before the main method starts. And in some languages that spreads to the imports, which is usually where you get into these chicken-and-egg problems.
One of the solutions here is to make the entry point small, and make 100% of the bootstrapping explicit.
Which is to say: move everything into the main method.
I’ve seen that work. On the last project it got a little big, and I went in to straighten out some bits and reduce it. But in the end anyone could read the initialization sequence for themselves, without needing any esoteric knowledge.
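Roughly what that looks like in Java. This is just a toy sketch - App, Config, Database and Server are names I made up for illustration, not anything from the project above:

    public final class App {

        record Config(String dbUrl, int port) {
            static Config fromArgs(String[] args) {
                // hypothetical defaults, a stand-in for real config loading
                return new Config(args.length > 0 ? args[0] : "jdbc:h2:mem:app", 8080);
            }
        }

        static final class Database {
            Database(String url) { System.out.println("connecting to " + url); }
        }

        static final class Server {
            private final Database db;
            Server(Database db) { this.db = db; }
            void start(int port) { System.out.println("listening on " + port); }
        }

        public static void main(String[] args) {
            // 1. Read configuration first - nothing runs before main does,
            //    so there is no hidden static-initializer ordering to reason about.
            Config config = Config.fromArgs(args);

            // 2. Construct shared services explicitly, in a readable order,
            //    and pass them down instead of reaching for a global singleton.
            Database db = new Database(config.dbUrl());

            // 3. Wire up the top-level object and start it. The whole
            //    initialization sequence is right here, in one place.
            Server server = new Server(db);
            server.start(config.port());
        }
    }

The trade-off is that main gets long, but it reads top to bottom.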
There's a good and a bad way to be the manager who encourages taking ownership. You're skeptical of the lazy type; it also gets masked as "delegation": ICs tasked with doing literally everything. And yes, it can get ridiculous, but even then the ICs do learn outsized skills.
The good kind: in medium-to-large companies there are definitely middle-management needs that must be handled. A good manager is doing those things and shielding you from them. The ownership is about domain, skill, and project expertise - the people closest to the work should be in positions of ownership. A manager is the grease in the machine.
Area. Golden Cove (12-series) and Raptor Cove (13-series, except for some of the lower SKUs, which are rebranded Golden Cove) are obscenely massive. It is something close to 2x the area of a Zen3 core, and Intel 7 is not even a 5nm-tier node! Logic density went up 1.8x between TSMC N7 and N5, so this means something like 3.2x the transistor count at the high end. Achieved shrink will be a bit lower, but let's say 3x the transistor count of Zen3.
And this probably understates things, because that's Golden Cove, not Raptor Cove. Intel went obscenely huge on caches with Raptor Cove too - they did the same thing as NVIDIA with Ada and dumped an assload of L1/L2 cache on it. I don't know off the top of my head, but let's say 10-20% bigger for Raptor cores.
In contrast Gracemont is much smaller - it is not quite the "4x" as advertised; 4 is actually the number of cores in a Gracemont CCX/cluster, and the cluster is somewhat bigger than a Golden Cove core. So in actual core area it works out to about 3.26 Gracemont cores per Golden Cove core, and again, Raptor Cove is significantly bigger.
--
So the tradeoff is like - they could have done a 12P0E or something like that, for about the same area as an 8P12E. Which would still lose to a 16P0E Zen3 in multithreaded workloads, most likely.
That's the game Intel is playing - 8P is generally enough for games, but it's area-inefficient to keep scaling like that. You have these other bulk tasks that just want tons of cores and don't care about peak performance, so you get a mix of both. The E-cores give you more perf/area and the P-cores give you more peak perf for games/etc. So theoretically it's the best of both worlds: it's not as slow as a full E-core chip would be, but it has a lot more MT performance than an all-P-core chip would.
Unspoken underlying problem being that Intel's P-cores are much, much, much bigger than the competition. Hence they have a much greater need to come up with a "compact" alternative than AMD does. Using that 1.8x logic scaling factor (which is optimistic), a Zen3-on-5nm design would be 1.72mm2 which is just about the same size as Gracemont. So Intel's "e-core" is about as big as AMD's p-core! Hence why they are much more focused on a whole new core design, where AMD just densifies the existing one (high efficiency/high-density libraries, reduced cache, back to 4-core CCX, etc). Squeeze that last 30% and call it a day.
On a more tactical level, I think it is also a move to force people to use Gracemont and start writing code for it. Long-term, your P-core being 3x the size of your competitors' is not sustainable, and they need to pivot away from the existing P-core design (lakes/coves); it is obviously just a mess internally from 3 decades of tech debt. Nobody really cares about the atom chips, despite them being pretty good for a long time now (my J5005 NUC made a great thin client during the pandemic, I use them for HTPCs, etc). Well, now you have to care, or you're leaving performance on the table on the mainstream Intel chips. It's not just "Intel loves big.little" or "needs big.little for area"; "big.little" is also a way for them to start getting the "little" cores running real-world code, because in the long term they need to kill the coves off (and perhaps replace them with a mont-derived alternative).
(My suspicion is that this is a case of Conway's Law in action: the architecture of the Lake/Cove family resembles the Intel organizational chart, and since Intel is a giant knot, that's the processor architecture they produce - and they've been doing that for at least 20 years. In hindsight Pentium 4 was the warning sign of the internal rot; they got it back together for a while, but after the Sandy Bridge era they collapsed, and everything since then is probably just more and more tech debt and kludges stacked on.)
--
Also, frankly, the e-core's "CCX" design makes sense. Tiering your interconnect/cache is what AMD has done very successfully - you have 2 CCXs per CCD (on zen2), 8 CCDs per socket. And that lets them decompose the interconnects into manageable pieces - 4 cores per CCX is a simple all-connected topology. Those talk to 4 quadrants on the IO die, which is a simple topology. If you want to talk to the other CCX, you have to go through the quadrant/IO die, so there is no "special case" there. It's all just a composition of simple pieces.
Ringbuses get annoying/inefficient past about 10-12 cores, which is why Intel abandoned them in server after broadwell-EP (with its "dual ring" design). But a mesh of individual cores also has this huge latency penalty, and consumes a bunch more area, and (in practical configurations) still tends to be very bottlenecked unless you spend an even higher amount of area on it.
What's the middle-ground? You group the cores into clusters/CCXs and you either have a mesh-of-CCX or a ring-of-CCX or some other tiered structure. And you can break the "tile" idea down into tiers too - a tile is a ring or mesh of cores, and then you have a mesh of tiles, but these are separate logical tiers and don't need to interact.
It is the usual HPC networking problem - connecting 1, 2, or 4 nodes is easy, with simple all-connected or hypercube topologies and a small number of links. A hypercube requires only 2 links per node for 4 nodes; an all-connected topology requires only 3 links per node. And you can solve for modestly higher numbers with something like a ringbus (which gets a lot of flak but is an extremely performant network structure, and AMD uses them too for their 8-core CCX). But that falls apart with higher numbers of nodes, and big switched-fabric networking chips or backbone switches are some of the largest and most expensive chips manufactured - a 32-port 400GbE switch (idk, whatever) is gonna be a big beefy boy in itself; that type of thing often hits 750mm2+ of silicon on the latest nodes.
You need something that both scales in terms of network hardware/area/power and also performs in terms of actual latency and throughput. That's super difficult (and the best topologies are de facto "tiered" anyway, like hypercube or butterfly), so the best strategy is to introduce this tiering explicitly. And AMD has meticulously stayed within the limits of "the IO die is a simple hypercube topology of quadrants" and "the CCX is a 4C all-connected or an 8C ringbus", and just composed these simple things together with tiering.
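To put rough numbers on that, here's a back-of-the-envelope toy in Java - purely illustrative link counting, not a model of any real Intel or AMD fabric, and the "clusters joined by a ring" tier is my own made-up composition:

    // Total link counts for a few topologies, purely illustrative.
    public class TopologyLinks {

        // Fully connected: every node has a direct link to every other node.
        static long fullyConnected(long n) { return n * (n - 1) / 2; }

        // Ring: each node links to its two neighbours.
        static long ring(long n) { return n >= 3 ? n : Math.max(0, n - 1); }

        // Hypercube: n must be a power of two; each node has log2(n) links.
        static long hypercube(long n) {
            long dims = Long.numberOfTrailingZeros(n);
            return n * dims / 2;
        }

        // Tiered: fully connected clusters, clusters joined by a ring.
        static long tiered(long n, long clusterSize) {
            long clusters = n / clusterSize;
            return clusters * fullyConnected(clusterSize) + ring(clusters);
        }

        public static void main(String[] args) {
            System.out.println("cores | full-mesh | ring | hypercube | 4-core clusters + ring");
            for (long n : new long[]{4, 8, 16, 32, 64}) {
                System.out.printf("%5d | %9d | %4d | %9d | %d%n",
                        n, fullyConnected(n), ring(n), hypercube(n), tiered(n, 4));
            }
        }
    }

Flat all-to-all blows up quadratically (2016 links at 64 cores), while the tiered version stays around a hundred - that's the composition trick in a nutshell.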
I think low-key Gracemont is important because it's Intel tinkering with the same concept - and they're doing mesh-of-tiles with sapphire rapids too (not sure what topology is inside each tile though). Because they can't have 14+ stops on the ringbus (memory controller, 12 cores, iGPU, etc) and the purist "mesh of single cores" topology obviously didn't work with skylake-SP.
Anyway, I wish they would do an all-P-core chip too, and theoretically that exists - it's Sapphire Rapids, and there is a workstation/HEDT variant. It's just super expensive and has massive power-transient problems (you need 500W of headroom, literally - you will crash if you don't have a 1kW+ PSU; they are not kidding about 1300W being the recommendation) that might be sending them back to the drawing board for another stepping. And there are all-E-core chips too, that's Sierra Forest... but it seems like a tentpole customer pulled out (rumored to be Facebook iirc) because Bergamo, AMD's compact-core based Epyc server chip, is more attractive. So they have reduced the scope of Sierra Forest; it now tops out at 2 of the medium chiplets, and the big chiplets are canceled entirely (where they had planned to use up to 4 of the big ones).
MLID is such an unreliable source that I hesitate to recommend him; I like his content and listen to him a lot, but you really need to understand the broader context of the market/etc to know whether what he's saying makes sense. But he does tend to have some interesting guests who are usually way better than he is, and one of his recent guests was a boutique PC builder who specializes in digital audio workstations (which need to be super low latency/etc). They talk about Sapphire Rapids workstation and some of the things being discussed around it.
Yes. It is your life. It isn't about what is out there. It's about what's in you.
My kids are 19, 17, 12. I tell them- you're not going to college to get an education that is about knowledge out in the world. You are going to get an education about you. To learn about your person- your body, your brain, your own mental model of your self and other selves and the world.
Your person is still in physical growth mode until at least 25, and then you have lots of other changes and challenges coming after that. You will continue learning, including about your self, throughout the entirety of your life. To be set up to do that is why you're going.
(Yes, college is not the real world, in any way. But in important ways it is real enough.)
==
The most important things to be able to do are- build relationships, focus and concentrate, organize your self and your thinking, communicate, have fun, and take care of the physical self. You don't have any idea, really, how well you do those things as a teenager. It's the job of the adults around you to help. College is an opportunity to expose your person to more unique, distinct, varied, skilled adults and peers than at any time previous, and for some, more than they will ever get again (unfortunately). That exposure is the most intense learning the self can do.
For each of my kids, they have things they are good at now, and things they are not good at. Not just skills- capabilities. Biases. Potentials, not actuals. As their parent I have a good sense of possible distinct and unique trajectories for each of them given those potentials, and I do what I can to coach them onto those various trajectories and in specific work domain disciplines that are potential fits (to my eyes) for them. But that's a conversation that is specific to our relationship. And their lives are their own.
For you, I would encourage you to see yourself not even at the beginning of your adventure, and to think hard and figure out good ways, with the guidance of adults you currently respect and trust, to avail yourself and position yourself to be exposed to and learn from new adults worthy of respect and trust. And pay it forward, too.
If you're able to get hold of it, there's an Australian show called "The Gruen Transfer" (in later years, just "Gruen") which analyses advertising and similar in contexts akin to the Gruen Effect.
* You get schedules and deadlines that discourage procrastination, if you're the sort of person who leaves things until the last minute.
* You (hopefully) get detailed feedback on assignments from a real human.
* You get big manicured lawns and marble pillars.
* You get health insurance.
* You get in-person small-group discussions (at least in some subjects) of advanced topics.
* You get a bunch of people of your age and social background, all uprooted from home and looking to make new friends at the same time.
* You get subsidised gyms and sports clubs and interest clubs.
* You get to network and meet people.
* You get buildings full of academics with office hours where you can basically just walk in and they'll explain almost anything to you.
* You get parties full of drunk young people, of all genders, some looking for relationships, others for casual sex.
* You get access to journals and databases and software that usually costs $$$$$ (and computer labs with it all already set up).
* You get a weird 'future middle class' social status where you can drink and party and not get a job and go into debt - yet be treated as someone successful.
* You get an easily understood explanation for that gap in your resume.
* You get internship opportunities - where you can get your foot in the door at big employers, while being paid and taught how they do things.
* You get loans for your living costs, despite having no income or credit history.
* You get access to a library with more (serious, intellectual) books than you could read in a lifetime.
* You get to leave home, but with training wheels if you're not ready to cook and clean and do laundry and pay bills all at once.
* You get supply control, from 'weed out classes' and limited numbers of student places.
* You get in-person exams that are at least moderately difficult to cheat on.
* And yes, you (hopefully) get a credential at the end of it. Maybe even with a good 'brand'.
Online courses can certainly deliver lectures at a much lower price than the ruinous cost of US universities. But I think part of the reason the likes of Coursera aren't on their way to replacing conventional colleges is that they're missing so much else from the bundle.