joseph_grobbles's comments | Hacker News

This is a really terribly written article. Sorry for the negativity, but it was tough to dig through.

For those confused by the title, Intel released a Xeon that includes 64GB of high speed RAM on the chip itself, configurable as either primary or pooled memory, or a memory subsystem caching layer.


I also found it hard to read; I wanted to know how one might use this thing, but instead I learned all about channels, "wings", backplanes, and saw a lot of tables and photos that seem to be largely duplicates. An entire page (out of 5 pages) was dedicated to examining the development system.


I suspect it was the video transcript or a lightly edited version of the transcript.


I came here to say the same thing; usually STH is very readable. This has to be either a video transcript or some AI pruning (maybe a combination), because it reads absolutely horribly.


Just as gaming hardware has taken over compute, gaming journalism must take over reporting!


It's a variation of Parkinson's Law -- we just keep expanding what we are doing to fit the hardware available to us, then claiming that nothing has changed.

CI is a fairly new thing. The idea of constantly doing all that compute work again and again was unfathomable not that long ago for most teams. We layer on loads of ancillary processes, checks, lints, and so on, because we have the headroom. And then we reminisce about the days when we did a bi-monthly build on a "build box", forgetting how minimalist it actually was.


This is true, but there's still choice in how we expand and the default seems to be to do it as wastefully as possible.


As wastefully as we can get away with, no?

Not the same thing.


Yes, your wording is better; I don't believe people are actively trying to be as wasteful as possible.


It's what the economy rewards. Simple as that.


I don't totally agree. It's what a short-term view of the economy rewards, for sure. But even setting the economics aside, I've seen plenty of low-performance software written purely out of cargo culting and/or an inability or lack of will to do anything better.


Google's first TPU was developed a year after TensorFlow. And for that matter, TensorFlow works fine with CUDA, was originally built entirely for CUDA, and it's super weird the way it's being referenced here.

TensorFlow lost out to PyTorch because the former is grossly complex for the same tasks, with a mountain of dependencies, as is the norm for Google projects. Using it was such a ridiculous pain compared to PyTorch.
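
To make the complexity gap concrete, here's a rough sketch of the same trivial computation in TF1-era graph mode versus PyTorch's eager mode (illustrative only; TF2 later defaulted to eager execution, but this is roughly the era when PyTorch pulled ahead):

    # Illustrative sketch: TF1 graph/session style vs. PyTorch eager execution.

    # --- TensorFlow 1.x: build a graph, then run it in a session ---
    import tensorflow as tf  # assumes a 1.x install

    x = tf.placeholder(tf.float32, shape=[None, 3])
    w = tf.Variable(tf.zeros([3, 1]))
    y = tf.matmul(x, w)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

    # --- PyTorch: just run the code ---
    import torch

    x = torch.tensor([[1.0, 2.0, 3.0]])
    w = torch.zeros(3, 1, requires_grad=True)
    print(x @ w)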

And anyone can use a mythical TPU right now on the Google Cloud. It isn't magical, and is kind of junky compared to an H100, for instance. I mean...Google's recent AI supercomputer offerings are built around nvidia hardware.

CUDA keeps winning because everyone else has done a horrendous job competing. AMD, for instance, had the rather horrible ROCm, and then they decided to gate their APIs to only their "business" offerings, while Nvidia was happy letting CUDA work on almost anything.


Best explanation so far. I am surprised OpenCL never gained much traction. Any idea why?


The same reason most of AMD's 'open' initiatives don't gain traction: they throw it out there and hope things will magically work out and that the community will embrace it as the standard. It takes more work than that. What AMD historically hasn't done is the real grunt work of addressing the limitations of their products/APIs and continuing to invest in them long term. See how the OpenCL Cycles renderer for Blender (written by AMD) worked out, for example.

Something AMD doesn't seem to understand/accept is that since they are consistently lagging Nvidia on both the hardware and software fronts, Nvidia can get away with things AMD can't. Everyone hates Nvidia for it, but unless/until AMD wises up, they're going to keep losing.


What did you do to get all your posts automatically dead?


My Brother multifunction has been without CMY inks for literally years, which was fine as I only printed in B&W. A recent system update has suddenly made it refuse to print anything, including B&W, because one or more color inks are empty, despite the black being 80% full.

Brother saw the money on the table and has decided to move towards the dark side.


It's some tiny Twitter (blue) user posting some nebulous list. Seems pretty BS to me, and it's surprising HN falls for this. Okay, it actually isn't surprising.

It's actually funny how confirmation bias works: in other posts people say "this tracks with other rumors"... yeah, that's what people who make stuff up tend to do.


Such considerations must incorporate public safety and crime as well. Smartphones are often the most expensive thing we have on our person and have become a huge target for thieves. Locking/bricking went a long way towards reducing this, and then limiting the resale value of parts did so again.


Honestly, I wish they would serial-lock every single part of the phone possible, and then unlock them when you detach your Apple account from it. Put up a message that says "This display is owned by another Apple account" and refuse to function until it's removed or you contact the owner to have it unlinked.

Unless Apple is dealing with thefts at the factories, before the parts make it into a phone.
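
To be concrete about what I mean, something like this (a purely hypothetical sketch; the part serials, registry contents, and account names are invented for illustration, and this is not Apple's actual mechanism):

    # Hypothetical sketch of a boot-time per-part pairing check.
    # Serials, registry contents, and account IDs are all made up.
    from dataclasses import dataclass

    @dataclass
    class Part:
        name: str    # e.g. "display", "camera"
        serial: str  # unique per-part serial number

    def owner_account(serial):
        """Which account (if any) is this part serial paired to?
        In reality this would be a signed record from a pairing service."""
        fake_registry = {"DSP-123": "alice@example.com"}
        return fake_registry.get(serial)

    def check_parts(parts, this_device_account):
        """Warn for any part paired to a different account."""
        warnings = []
        for part in parts:
            owner = owner_account(part.serial)
            if owner and owner != this_device_account:
                warnings.append(f"This {part.name} is owned by another account. "
                                "It will work once the owner unlinks it.")
        return warnings

    print(check_parts([Part("display", "DSP-123")], "bob@example.com"))

The important part is the unlink path: a legitimately sold or donated part becomes fully usable again once the previous owner detaches it, so only stolen parts lose their value.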


Technically they could do this already; it would just cause some bad publicity when people who got their phone repaired by a third party wake up to a message that their screen is stolen.


Do it for new part swaps only. And make it clear in the message that you can use the part; you just have to unlink it from the owning account.


We need systems, processes, and technologies that ensure the authenticity (provenance) of a supply chain.

Art, food, electronics, materials, clothing, malware, rootkits, ad nauseam.

I've been reading about counterfeit, black market, and gray market goods for decades. Do not want.

If I pay $1,000 for a Gucci handbag, I want an authentic $1,000 Gucci handbag. (I have zero issue with knockoffs clearly labeled as knockoffs.)

Anti-consumer, anti-labor, anti-customer, anti-fair-use, and pro-monopoly bullshit regimes like DRM, the DMCA, inability to repair, and price gouging are orthogonal issues. We can have provenance without these shackles if we choose to rein in corporate power.

As for Apple in particular, they're not the worst, and have been getting better. Their phones and laptops are the most reliable and are becoming easier to repair (design and logistics). The terms of their AppleCare have gotten more generous (forgiving).

Spitballing, I'd say Apple is ~1/3 of the way towards a healthy cradle-to-grave product lifecycle. They can and should do much better with third-party repairs. Like making authentic parts available at cost. Certifying repair shops. Certifying technicians. Etc.

Source: I was a tech at an Apple Dealer as a kid. Our leads were trained and certified. Our parts were all authentic. My notions are based on experience, not some utopian fantasy.


If the device becomes unusable because it is marked as stolen, disassembling it for parts should not be economic.

One way to do this is to spend MORE money to make sure every single part of the device is waste and MUST go to a landfill.

Another way would be to do the opposite, and make the spare parts readily available for everyone, to make the sum of the parts less valuable. The mainboard is already unusable because it's flagged as stolen; the rest of the parts should not be worth more than 60-70 USD. But because some of the parts cannot be purchased at all, they are currently worth a lot more.


Not only that, but ensuring genuine parts also goes a long way against hardware-based attacks.


That's not a good reason to do this. If you really wanted to do it that much, you could just modify the real parts, and that isn't really a thing that happens anyway. What's a far bigger concern is giving one company complete control over whether you can repair your own phone or not, creating a monopoly where they can charge whatever they want, and they might not even do it at all because they'd prefer you to buy a new one. This is actually happening to the average person; these theoretical attacks that serialisation doesn't really protect against aren't.


> If you really wanted to do it that much you could just modify the real parts

It seems like the phone would alert for any swapped part, whether genuine or not. Maybe this is why. Makes sense to me now.


I don't think that's why; I've never heard of that ever happening.


Maybe you don't hear about it because it works, and so no one attempts it.

By the way, long ago I got a screen repaired at an unofficial place on an old iPhone, and the camera started working incorrectly (focusing issues, etc.). I kinda suspect they swapped the camera for a different one; if the phone had warned me immediately, that would have been cool. I've heard many similar stories.


Correct.


>There's nothing in the article suggesting they've changed their strategy with TPUs in any way

Google owns and designs their own TPUs. They offer these TPUs in the cloud. I've seen many comments here about how next-level TPUs are (despite zero evidence indicating that). Google itself even disclaims the comparison, saying you shouldn't put the TPU up against the H100 given process node differences, et al.

Their premier offering is built around the Nvidia H100.

Yes, of course this is a pretty telling indication. If Google were all in on TPUs, they'd be building mega TPU systems and pushing those. Instead they're pushing Nvidia AI offerings.


Horribly cynical, and I can't imagine having that viewpoint. It's actually fascinating how the author first justifies that sort of malaise, and then does the "but of course not me" thing.

Even if someone were that self-focused, in almost any group or organization, critical security vulnerabilities and significant costs do hurt everyone in the group. You're going to be the one having a rough time when expenses exceed value and when major embarrassments happen. There is no insulation from it.


I'm confused as to how you felt the author is justifying it:

> Now, I would personally feel shame if I did these things.

They seem more interested in trying to find a solution to it. Or just posing it as a legitimate problem, the solution to which is food for thought.


I was once brought into a team that fervently bought into the "hotspot" argument, blustering ahead under the notion that performance was tomorrow's problem, when someone would spend a day with a profiler and it would all be fixed.

In reality their project was death by a thousand... nay, millions or billions... of cuts. Poor technology choices. Poor algorithm choices. Incompetent usage (e.g. terrible LINQ usage everywhere, constantly). This was the sort of project where profiling was almost impossible because every profiling tool barfed and gave up at every tier.

Profiling the database was an exercise in futility. Profiling the middle tier produced a flame graph of endless peaks. Profiling the front-end literally crashed the browser. I ended up having to modify the Chromium source to get an accurate bead on how disastrously the Angular app was built.

This is common. If performance doesn't matter to a team, it will never be something that can be easily fixed. Maybe you can throw a huge amount of money at the problem and scale up and out to a ridiculous degree for a tiny user base, but making an inefficient platform efficient is seldom easy.
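
For anyone who hasn't lived it, the "thousand cuts" pattern looks something like this. This is a Python analogue (the original sins were in LINQ/C#, so it illustrates the pattern, not the actual code); note that no single line shows up as a hotspot, which is exactly why "a day with a profiler" doesn't save you:

    # Each call is cheap enough to hide in a profile, but the linear scans
    # inside the loop make the whole report quadratic.
    def slow_report(orders, customers):
        lines = []
        for order in orders:
            # O(n) scan per order instead of building an index once
            customer = next(c for c in customers if c["id"] == order["customer_id"])
            # recomputed on every iteration even though it never changes
            open_count = len([o for o in orders if o["status"] == "open"])
            lines.append(f"{customer['name']}: {order['amount']} ({open_count} open)")
        return lines

    def fast_report(orders, customers):
        by_id = {c["id"]: c for c in customers}                  # index built once
        open_count = sum(o["status"] == "open" for o in orders)  # computed once
        return [f"{by_id[o['customer_id']]['name']}: {o['amount']} ({open_count} open)"
                for o in orders]

Multiply that by every query, every endpoint, and every component, and there is no single fix you can profile your way to.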


>This doesn't really solve a problem anyone had that fixed location Starlink and Rogers' LTE network didn't solve already

This is for locations with no connectivity, where SpaceX satellites become very low-bandwidth Rogers towers for critical situations. Your examples are not relevant.

The rest of your post is just knee-jerk oppositional Luddism that doesn't apply at all.

