> If you need one in Zig you can go that route or use a tagged union; it's just that Zig is explicit about what an interface is.
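The tagged-union approach mentioned above can be sketched in Rust (the thread is about Zig, but the pattern is language-agnostic): the set of implementors is closed and known up front, and "dispatch" is an explicit match rather than a vtable lookup. The `Shape` type and its variants are illustrative, not from any real codebase.

```rust
// A closed "interface" as a tagged union: every implementor is listed
// in the enum, and dispatch is an explicit match instead of a vtable.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

impl Shape {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }
}

fn main() {
    let shapes = [Shape::Circle { radius: 1.0 }, Shape::Rect { w: 2.0, h: 3.0 }];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {total:.2}"); // prints "total area: 9.14" (pi + 6)
}
```

The tradeoff versus an open interface is exactly the one being discussed: adding a new shape means editing the enum and every match, but the compiler tells you every site you missed.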
Perl objects worked like this, and the ecosystem is a mess as a result. There's a built-in `bless` to tag a reference with a package, which gives it method dispatch. There's some cool stuff you can do with that, like blessing arrays to do array-of-structs vs struct-of-arrays and making it look like a regular object in user code. The problem is there are like 4 popular object libraries that provide base stuff like inheritance, meta objects, getters/setters, etc., and they're not all compatible, and they have subtle tradeoffs that get magnified when you have to pull in multiple to get serious work done.
It’s several factors and all of your alternatives are true to some degree:
1. An H20 is about 1.5 generations behind Blackwell. This chip looks closer to 2 generations behind top-end Blackwell chips. So being ~5 years behind is not as impressive, especially since EUV, which China has no capacity for, is likely to be a major obstacle to catching up.
2. Nvidia continues to dominate on the software side. AMD chips have been competitive on paper for a while and have had limited uptake. Chinese government mandates could obviously correct this after substantial investment in the software stack, but that is probably several years out.
3. China has poured trillions of dollars into its academic system and graduates more than 3x the number of electrical engineers the US does. The US has also been training Chinese students, but its much more limited work visa program has transferred a lot of that knowledge back, without even touching IP issues.
They also have some insane power generation capability - doesn't seem that far fetched that they just build a shitload of slower chips and eat the costs of lower power efficiency.
In theory, but I'm not sure that's true in practice. There are plenty of mundane, non-groundbreaking tasks that will likely be done by those electrical engineers, and the more people and the more space there are, the more such tasks there are to be done. Not to mention that more engineers does not equal better engineers, and the types who work on these sorts of projects are going to be the best engineers, not the "okay" ones.
The more engineers you can sample from (in absolute number), the better (in absolute goodness, whatever that is) the top, say, 500 of them are going to be.
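The order-statistics point above can be illustrated with a toy simulation, assuming skill is drawn i.i.d. from a fixed distribution (which is exactly the assumption the replies below dispute). The RNG and the pool sizes are made up for the example; the only claim is the direction of the effect.

```rust
// Toy simulation: draw "skill" scores from the same distribution for two
// pool sizes and compare the average skill of the top 500 in each.
// A simple LCG keeps the example free of external crates.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
    // Approximate a standard normal as a sum of 12 uniforms minus 6.
    fn next_normal(&mut self) -> f64 {
        (0..12).map(|_| self.next_f64()).sum::<f64>() - 6.0
    }
}

fn top_k_mean(pool: usize, k: usize, seed: u64) -> f64 {
    let mut rng = Lcg(seed);
    let mut scores: Vec<f64> = (0..pool).map(|_| rng.next_normal()).collect();
    scores.sort_by(|a, b| b.partial_cmp(a).unwrap());
    scores[..k].iter().sum::<f64>() / k as f64
}

fn main() {
    let small = top_k_mean(100_000, 500, 42);
    let large = top_k_mean(300_000, 500, 42);
    println!("top-500 mean, 100k pool: {small:.2}");
    println!("top-500 mean, 300k pool: {large:.2}");
    assert!(large > small); // a bigger pool has a better top slice
}
```

The larger pool's top 500 always ends up better here, which is the absolute-numbers argument; whether the same distribution assumption holds across education systems is the real point of contention.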
That's assuming top-tier engineers are a fixed percent of graduates. That's not true and has never been.
Does 5x the number of math graduates increase the number of people with ability like Terence Tao's? Or even meaningfully increase the number of top-tier mathematicians? It really doesn't. Same with any other science or art. There is a human factor involved.
This is not necessarily true. Hypothetically, if most breakthroughs come from PhDs and they aren't producing more PhDs, then that pool is not necessarily larger.
You just said what I said. I didn't say that 100% of the graduates are stupid, but certainly not all high tier either. We aren't in extreme need of the average electrical engineer or the average software engineer. That's a fact. Look at unemployment rates.
Doesn’t seem to work for India. Wuhan university alone probably has more impact than the sum of. Of course a competent state and strategic investment matters.
Depends on how you define quality. While medium and large format photography are extremely high resolution, that's not the only factor. Space age lenses were significantly lower resolution than the film. Modern mirrorless lenses are starting to come close to being able to out-resolve film but still aren't there. Meaning that you get more functional resolution out of modern digital. Digital also beats the pants off film for dynamic range and low light. That said, the noise (grain) and dynamic range fall-off in film are more pleasing than digital to most eyes. So it's not all about technical specs.
> Digital also beats the pants off film for dynamic range and low light.
While this is true now, it took a surprisingly long time to get there. The dynamic range of professional medium format negative films is still respectable. Perhaps not so much in low light, but it's very immune to overexposure.
Also, you can buy a cheap medium-format camera in a good condition and experience that "huge sensor" effect, but unfortunately there are no inexpensive 6x6 digital cameras.
It’s incredibly rare and specific; many people in the digital back world don’t even know about them. A 60x60mm sensor, larger than the actual film gate of 56x56mm. There was also a version for the Rollei 6x6 6000 series, the Rollei Q16. I’ve only seen one for sale ever.
Technically, sensors larger than 6x6 film have existed since the 80s or 90s at least, but they are typically only used for government things… Some digital aerial systems use huge sensors.
> Space age lenses were significantly lower resolution than the film.
Can you say a little more about this? Modern lenses boast about 7-element designs or aspherics, but does that actually matter in prime lenses? You can get an achromat with two lenses and an apochromat with three. There have definitely been some advances in glass since the space program, like fluorite versus BK7, but I'm wholly in the dark on the nuances.
I find modern primes much sharper than their older counterparts not because of the elements or the optical design, but from the glass directly.
Sony's "run of the mill" F2/28 can take stunning pictures, for example. F1.8/55ZA is still from another world, but that thing is made to be sharp from the get go.
The same thing is also happening with corrective glasses. My eye numbers are not changing, but the lenses I get are much higher resolution than the set they replace, every time, to the point that I forget I'm wearing corrective glasses.
> I find modern primes much sharper than their older counterparts not because of the elements or the optical design, but from the glass directly
Even back in their prime, haha, Cooke leaned into its glass manufacturing by calling it the Cooke Look. All of the things that gave it that look are things modern lenses would consider issues to correct.
Actually, I'm pretty flexible when it comes to how lenses and systems behave. A lens with its unique flaws and look is equally valuable for me as a razor-sharp ultra high-fidelity lens.
It all boils down to what you want to achieve and what emotion you're trying to create with your photography. Film emulation has come a long way, but emulating glass is not possible in the same way (since you don't have any information about what happened to your photons on their way to the sensor), and lenses are an important part of the equation, and will forever be, I think.
Actually, anything we use is "just tools". IDEs, programming languages, operating systems, text editors, fonts, etc.
We all prefer different toolsets due to our differing needs and preferences. Understanding it removes a lot of misunderstanding, anger and confusion from the environment.
But, reaching there requires experience, maturity and some insight. Tool suitability is real (you can't drive a screw with a pair of pliers), but the dialogue can be improved a ton with a sprinkle of empathy and understanding.
The lenses also have to be better to compensate for the smaller sensors. All lens defects get more "magnified" the smaller the sensor is. So a straight comparison isn't fair unless the sensor is the same size as the film was.
I wrote a longer post a few months ago.[1] The tl;dr is a) computer aided design and manufacturing b) aspherical elements c) fluorite glass d) retro focus wide angle designs and e) improved coatings. Mirrorless lenses also beat slr lenses because they are much closer to the film plane — of course rangefinders and other classic designs never had this problem to begin with.
1: https://news.ycombinator.com/item?id=42962652
Edit: this is just for prosumer style cameras. If you look at phone sized optics that’s a whole other ballgame.
> The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
If your build allows, the extra money for an LSI or real RAID controller is well worth it. The no-name PCIe SATA cards are flaky and very slow. Putting an LSI in my NAS was a literal 10x performance boost, particularly with ZFS, which tends to have all of the drives active at once.
> If your build allows, the extra money for an LSI or real RAID controller is well worth it.
Keep in mind if you get a real RAID controller and want to use ZFS, you probably want to ensure it can present the disks to the OS as JBOD. Sometimes this requires flashing a non-RAID firmware.
That said, slightly older LSI cards are quite cheap on eBay, and the two I've bought have worked perfectly for many years.
I am curious about the slow part. I use these crappy SATA cards and I am sure they are crappy, but the drives are only going to give 100MB/s in bursts and they have an LVM cache (or ZFS stuff) on them to sustain more short-term writes.
I'd get it if I were wiring up NVMe drives that are going to run at 500MB/s and higher all the time.
What I really care about with SATA, and what I mean by flaky, is when I have to physically reboot a system every day because the controller stays in a bad state even after a soft `reboot` command, and then Linux fills up with IO timeouts because the controller seems to stop working after some amount of time.
Kodak managed the film and camera market about as well as they could. The mismanagement was a failure to diversify. The total digital camera market, excluding cell phones, would be a fraction of Kodak's film business back in the film era. The film-and-camera story is a popular one but is fundamentally wrong: the shrinkage of the camera/film market was inevitable. Look at Fujifilm, who does sell cameras and basically owns the remaining film market with Instax; neither of those sustains the business. They are effectively a chemical and medical manufacturer who dabbles in photography now.
Kodak, on the other hand, attempted to diversify into those markets in the 80s and 90s but made some terrible investments that they managed poorly. That forced them to leave those markets and double down on film just in time for the point-and-shoot boom of the 90s and the early digital market. Kodak was a heavy player in the digital camera market up to the cell phone era: they had the first dSLR and were the dSLR market for most of the 90s, they had the first commercially successful lines of digital point-and-shoots, they had the first full frame dSLR in the early 00s, and they jockeyed for positions 1-3 in the point-and-shoot market until the smartphone era. They continued to make CCD sensors for everyone during this time. Yes, they missed the CMOS changeover and the smartphone sensor market, but that was well after they were already circling the drain.
I'm also a fast listener. I find audio quality is the main differentiator in my ability to listen quickly or not. A podcast recorded at high quality I can listen to at 3-4x (with silence trimmed) comfortably; the second someone calls in from their phone I'm getting every 4th word and often need to go down to 2x or less. Mumbly accents also hurt, but not as much; then again, I rarely have trouble understanding difficult accents IRL and almost never use subtitles on TV shows/youtube to better understand the speaker. Your mileage may vary.
I understand 4-6x speakers fairly well but don't enjoy listening at that pace. If I lose focus for a couple of seconds I effectively miss a paragraph of context and my brain can't fill in the missing details.
> I wonder how much value there is in skipping LLVM in favor of having a JIT optimizer linked in instead. For release builds it would get you a reasonable proxy if it optimized decently while still retaining better debuggability.
Rust is in the process of building out the cranelift backend. Cranelift was originally built to be a JIT compiler. The hope is that this can become the debug build compiler.
I recently tried using cranelift on a monorepo with a bunch of crates, and it is nothing short of amazing. Nothing broke, and workspace build time went from a minute and a half to half a second!
Cranelift is only intended for debug builds, there is nothing stopping you from using it for release builds but — to the best of my knowledge — you get noticeably degraded runtime performance if you go that way.
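For anyone wanting to try the cranelift backend, it can be enabled per-profile on a nightly toolchain. The fragment below is a sketch of the setup from the rustc_codegen_cranelift project; the exact flags are unstable and may have changed, so treat it as a starting point rather than authoritative.

```toml
# Install the backend first (nightly toolchain required):
#   rustup component add rustc-codegen-cranelift-preview
#
# Then in .cargo/config.toml:
[unstable]
codegen-backend = true

# Use cranelift only for debug builds; release builds keep LLVM,
# avoiding the runtime performance hit mentioned above.
[profile.dev]
codegen-backend = "cranelift"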
Every fusion reactor design needs similar gains in some part of the stack outside of the fusion parts to make it a viable power source: tokamaks need magnets to be orders of magnitude better, the lining for the reactors needs to last much longer, the whole steam conversion mess, etc.
Commercial REBCO tape is an entirely sufficient superconductor for tokamaks. At this point the limiting factor for the magnetic field is the structural strength of the reactor. Tokamak output scales with the square of size and the fourth power of magnetic field strength, and using REBCO, the CFS ARC design should get practical power output from a reactor much smaller than ITER.
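That scaling law (output proportional to size squared times field to the fourth power) makes the REBCO case easy to sanity-check. The radii and field strengths below are illustrative placeholders, not actual ITER or ARC parameters.

```rust
// Relative fusion power for tokamak-like scaling: P ∝ R^2 * B^4.
// Inputs are illustrative, not real reactor specs.
fn relative_power(radius_m: f64, field_t: f64) -> f64 {
    radius_m.powi(2) * field_t.powi(4)
}

fn main() {
    // A machine half the size but with double the field still wins big,
    // because the field enters at the fourth power.
    let big_low_field = relative_power(6.0, 5.0); // large machine, 5 T
    let small_high_field = relative_power(3.0, 10.0); // half the size, 10 T (REBCO-class)
    println!("ratio: {:.1}", small_high_field / big_low_field); // prints "ratio: 4.0"
}
```

This is why high-field REBCO magnets let a reactor much smaller than ITER reach practical output: the B^4 term dominates the R^2 penalty.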
About the size of JET. It's definitely practical in the sense that we can build it and it's likely to produce overall net power. Whether it will be competitive is another issue, and for that I agree with you that other designs, like Helion, have a better shot.
That's not a definition of "practical" that I would use. "Possible", perhaps, but practical implies effectiveness and suitability, and without competitiveness that isn't there.
Also shout out to Keynote which is the best presentation software. PowerPoint is so clunky in comparison. Nice features like making image backgrounds transparent are huge wins.
Pages is also pretty nice. It's definitely enough for home usage, and if my colleagues could read Pages files natively I would find it completely sufficient for professional use. I find it does layout much better than MS Office, which honestly is a much bigger concern for home users: professional users will just switch to professional layout tools when they need them, but Sam doesn't need that cost/complexity for some bake sale fliers.
Numbers can also be nicer for home use cases, but is a bit weird if you're used to Excel. And unlike Pages or Keynote, it quickly hits upper limits on complexity. I would never use Numbers in a professional setting.
To add on to the sibling: specialized models, including fine-tuned ones, continually have their lunch eaten by general models within 3-6 months. This time around it's mixture-of-experts that'll do it; next year it'll be something else. Tuned models are expensive to produce and are benchmark kings, but they do less well in real-world qualitative experience. The juice just ain't worth the squeeze most of the time.
Meta does have some specialized models though, llamaguard was released for llama 2 and 3.
The expensive part is building the dataset; training itself isn't too expensive (you can even fine-tune small models on free Colab instances!), and when you have your dataset, you can just fine-tune the next generalist model as soon as it's released and you're good to go.