axoltl's comments | Hacker News

I do vulnerability research. Those things would do the exact opposite of what you're aiming for. They'd be received with glee by mercenary spyware companies, _especially_ being able to load things into higher levels of privilege.


That wouldn't be a problem; Apple signs extensions. In Windows land, for example, there are ELAM drivers for security software. Microsoft doesn't just hand those out; you basically have to convince people there, in person, that you're one of the good guys.


It means more attack surface (both from the extensions themselves and from the loader code), relaxation of things like KTRR/CTRR (you now need to be able to add executable EL1 pages at runtime), plus the potential for signing keys to leak (finding enterprise signing keys even for iOS is fairly easy).

As far as Windows goes, https://www.loldrivers.io is a thing.


Yeah, loldrivers are a thing because any signed driver can load. Vulnerable drivers with ELAM, though, I don't know of any; I believe they're quite rare.

You have a good point about attack surface, but Apple already has a pretty robust system for ensuring boot and lock security that doesn't rely on EL0/EL1 security. I'm sure you know more than I do about higher ELs like EL3 and secure-world code that can take care of all that. I'm pretty sure they don't have to issue new signing keys either. Matter of fact, why let third parties do this at all? Apple themselves could expose a memory and filesystem dumping API without involving third parties. That way they could also sanitize away anything they consider sensitive, and they could require that the commands be issued over a physical/authorized USB connection.

Point is, there are very legitimate and critical cases where memory and filesystem forensics matter. From what little chatter I've heard, forensic software today resorts to exploiting the devices, and those exploits tend to be abused for other reasons too.


Trusted high-privilege components, whether first or third party, are targeted for exploitation.


Do you know of any reports of macOS system extensions being abused this way? I've heard about Windows drivers, but my impression was that Apple is doing this well enough for it to mostly be a non-issue?


e.g. zero day CVE-2024-44243, patched last year, https://www.microsoft.com/en-us/security/blog/2025/01/13/ana...


That's a good one. To be clear, I'm not saying vulnerabilities don't or can't exist in system extensions. I'm just saying that Apple can publish and/or sign iPhone extensions for a very limited use case like this, or publish an API/system service to do the same thing, if the concern is third parties. The use case here is reading some memory and exposing it to authorized applications. I concede on the system extension part, but Apple can still expose the capability without one.


CrowdStrike showed us how good an idea that was.


CrowdStrike has system extensions on macOS.


You're confusing your opinion of the company with how the general public perceives it. Apple's definitely not perceived as 'an office appliance company' by your average person. It's considered a high-end luxury brand by many[1].

1: https://www.researchgate.net/publication/361238549_Consumer_...


I think you mean high-tech brand, which the linked article affirms.


It'd run on a 5090 with 32GB of VRAM at fp8 quantization, which is generally a very acceptable size/quality trade-off. (I run GLM-4.5-Air at 3-bit quantization!) The transformer architecture also lends itself quite well to having different layers of the model run in different places, so you can 'shard' the model across different compute nodes.
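
To make the sharding point concrete, here's a minimal toy sketch of naive pipeline parallelism in PyTorch (entirely my own illustration, not any particular framework's API; the class name and device list are made up): contiguous chunks of layers live on different devices and only the activations hop between them.

  import torch
  import torch.nn as nn

  class ShardedTransformer(nn.Module):
      """Assigns contiguous chunks of layers to different devices."""
      def __init__(self, layers, devices):
          super().__init__()
          chunk = (len(layers) + len(devices) - 1) // len(devices)
          self.devices = devices
          self.shards = nn.ModuleList([
              nn.Sequential(*layers[i * chunk:(i + 1) * chunk]).to(dev)
              for i, dev in enumerate(devices)
          ])

      def forward(self, hidden):
          # Only the (small) hidden state crosses the interconnect;
          # the weights stay put on their assigned device.
          for shard, dev in zip(self.shards, self.devices):
              hidden = shard(hidden.to(dev))
          return hidden

  # Toy usage: generic encoder layers standing in for decoder blocks.
  layers = [nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
            for _ in range(8)]
  devices = ["cpu", "cpu"]  # e.g. ["cuda:0", "cuda:1"] across real GPUs/nodes
  model = ShardedTransformer(layers, devices)
  out = model(torch.randn(1, 16, 512))  # (batch, seq, d_model)

Real inference stacks do essentially this with a lot more care about batching and interconnect bandwidth, but the per-layer structure is what makes it possible at all.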


From what I've been reading, the inference workload tends to ebb and flow throughout the day, with much lower loads overnight than at, say, 10AM PT / 1PM ET. I understand companies fill that gap with training (because an idle GPU costs the most).

So for data centers, training is just as important as inference.


> So for data centers, training is just as important as inference.

Sure, and I’m not saying buying Nvidia is a bad bet. It’s the most flexible and mature hardware out there, and the huge installed base also means you know future innovations will align with this hardware. But it’s not primarily a CUDA thing or even a software thing. The Nvidia moat is much broader than just CUDA.


I believe they mean the source region's tag, rather than the destination.


Not sure if I understand this correctly:

If an attacker somehow gains out-of-bounds write capability for a tagged memory region (via a pointer that points to that region, I assume), they could potentially write into a non-tagged memory region. Since the destination region is untagged, there would be no tag check against the pointer’s tag, effectively bypassing EMTE.

> I believe they mean the source region's tag, rather than the destination.

But in the previous case, the pointer the attacker uses should already carry the source region’s tag, so it’s still unclear if this is what they meant.

I’m not sure which attack scenario they had in mind when they said this. It would help if they provided a concrete attack example.
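
For what it's worth, here's the toy model I have in my head of that scenario (my own sketch only; the region layout, tags, and fault behavior are simplified assumptions, not the real EMTE semantics): a store whose destination lies outside any tagged region never gets a tag comparison at all, no matter what tag the pointer carries.

  # Toy model of tag checking on a store (simplified assumption,
  # not actual EMTE behavior): only tagged destinations are checked.
  TAGGED_REGIONS = {
      (0x1000, 0x2000): 0xA,  # (start, end): allocation tag
  }

  def store(addr, ptr_tag):
      for (start, end), mem_tag in TAGGED_REGIONS.items():
          if start <= addr < end:
              if ptr_tag != mem_tag:
                  raise MemoryError("tag check fault")
              return "checked store"
      # Untagged destination: nothing to compare the pointer tag against.
      return "unchecked store"

  print(store(0x1800, 0xA))  # checked store (pointer tag matches region tag)
  print(store(0x3000, 0xA))  # unchecked store: the bypass scenario above

Under that model the pointer's (source-derived) tag becomes irrelevant once the out-of-bounds destination falls in untagged memory, which is how I read the original comment.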


I have some inside knowledge here. KPP was released around the time KTRR was implemented on the A11, to provide some small amount of parity on pre-A11 SoCs. I vaguely remember the edict came down from on high that such parity should exist, and it was implemented in the best way they could within a certain time constraint. They never did that again.


Concerta is extended release methylphenidate. It is not an amphetamine.


Both are stimulants, though.


So are caffeine and nicotine, I'm not sure what your point is.


This isn't a transformer, it's a diffusion model. You can't split diffusion models across compute nodes.


Just from my limited experience:

Barton Springs in Austin is always brimming with people and Shiner Bock makes a frequent appearance.

Dolores Park in SF never has a dull moment and you can buy shrooms or edibles from vendors walking around.

Golden Gate Park in SF is massive and there are tons of clusters of people socializing and drinking throughout the park (especially near the Conservatory of Flowers!)

Central Park in NY in many ways mirrors Golden Gate Park, only it's way busier. Good luck finding a spot near the south side of the park on a sunny day. You might spot a mimosa or two, three…


Austin, SF, NYC

You are talking about 3 of the trendiest places in the United States.

They are anomalies, not the norm.


Something is lost as well if you do 'research' by just asking an LLM. On the path to finding your answer in the encyclopedia or academic papers, etc. you discover so many things you weren't specifically looking for. Even if you don't fully absorb everything there's a good chance the memory will be triggered later when needed: "Didn't I read about this somewhere?".


Yep, this is why I just don’t enjoy or get much value from exploring new topics with LLMs. Living in the Reddit factoid/listicle/TikTok explainer internet age, my goal for years (going back well before ChatGPT hit the scene) has been to seek out high-quality literature or academic papers for the subjects I’m interested in.

I find it so much more intellectually stimulating than most of what I find online. Reading, e.g., a 600-page book about some specific historical event gives me so much more perspective and exposure to different aspects I never would have thought to ask about on my own, or that would have been elided when clipped into a few-sentence summary.

I have gotten some value out of asking LLMs for book recommendations, mostly as a starting point I can use to prune a list of 10 books down to 2 or 3 after doing some of my own research on each suggestion. But talking to a chatbot to learn about a subject just doesn’t do anything for me for anything deeper than basic Q&A where I simply need a (hopefully) correct answer and nothing more.


LLMs hallucinate too much and too frequently for me to put any trust in their (in)ability to help with research.

