Hacker News new | past | comments | ask | show | jobs | submit login
iLeakage: Browser-Based Timerless Speculative Execution Attacks on Apple Devices (ileakage.com)
611 points by aw1621107 on Oct 25, 2023 | hide | past | favorite | 225 comments



Things I'm missing from this FAQ:

- Is this a Webkit vulnerability or a Safari vulnerability?

- Does enabling Lockdown mode mitigate this vulnerability, seeing as mobile Safari doesn't expose these dev settings?

- What's the timeline on the disclosure to Apple?

Edit: they updated the page to answer the last question:

  When did you notify Apple?

  We disclosed our results to Apple on September 12, 2022 (408 days before public release).


> Is this a Webkit vulnerability or a Safari vulnerability?

This is technically a Hacker News Favorite Processor a.k.a. M1/M2 vulnerability. However, all relevant CPUs on the market now have the same class of vulnerability, so it has effectively become a feature, and software has to be designed to mitigate it.

It is impractical to get rid of all possible Spectre gadgets in WebKit, so the browser should be designed to leverage the OS's Spectre mitigations to deal with these vulnerabilities (i.e. isolate different websites in different processes).

And, in the FAQ:

> Ultimately, we achieve a out-of-bounds read anywhere in the address space of Safari's rendering process.

So, in my opinion, this is a Safari vulnerability: they hold Site Isolation wrong.


>However, all relevant CPUs on the market now have the same class of vulnerability, so it has effectively become a feature, and software has to be designed to mitigate it.

I don't understand what you mean when you say the vulnerability to speculative execution side-channel attacks is a CPU "feature." Could you expound on your meaning there?

In my mind, a CPU feature would be something like out-of-order execution or integrated memory controllers. Isn't this more of a side effect (or an "oh shit...")? For example, I'd consider speculative execution to be a feature of CPUs with a side effect that causes these side-channel vulnerabilities, much like weight loss is a feature of the medicine Xenical (orlistat) with a side effect of causing anal leakage.

Also, mitigations for newly discovered spectre-like attacks have to be done software side, but successive processor generations will bake those mitigations into the silicon to reduce the performance penalties associated with software fixes.


> mitigations for newly discovered spectre-like attacks have to be done software side, but successive processor generations will bake those mitigations into the silicon to reduce the performance penalties associated with software fixes.

This is not correct. A Spectre-like bug within the same privilege domain (as far as the CPU is concerned) is just unfixable. Meltdown, L1TF, etc. were fixed in hardware because they leak data across hardware-enforced privilege domains (i.e. userspace/kernel/enclave); in that sense they are hardware bugs.

In the case of the browser, the CPU "feature" is "the CPU is eager to leak arbitrary data within the same address space even if, on its face, the code does not seem to do so". This can't be fixed in hardware because it's not a hardware bug: there is no hardware-implemented security boundary, it's purely software-defined.


>> Ultimately, we achieve a out-of-bounds read anywhere in the address space of Safari's rendering process.

> So, in my opinion, this is a Safari vulnerability: they hold Site Isolation wrong.

Previously, something I would normally consider a Safari specific bug (IndexedDB storage not being isolated to its owning web page) also made it into Gnome Web and various other WebKit browsers. Site Isolation is enabled on other browsers but as an outsider I have no idea if that is normally handled by WebKit or by the surrounding framework.

It looks like Gnome Web/Epiphany is safe at least: https://gitlab.gnome.org/GNOME/epiphany/-/merge_requests/448 but there are plenty of WebKit deployments out there, such as in cars and video game consoles.


The merge request you linked is for cross-site navigation, which is a different feature flag than the cross-site window open recommended by this paper.


Hmm, unfortunate. It was tagged as "site isolation" in https://gitlab.gnome.org/GNOME/epiphany/-/blob/master/NEWS?r... so I thought it was the same feature.


This is a WebKit vulnerability in the face of surprising microarchitectural behavior. First, they put one website into the address space of another, which, in a post-Spectre world, is a bug. Second, some of their code is correct under a traditional execution model but does not take speculative attacks into account, and needs to be updated.


I'm not familiar with WebKit, but I consider Site Isolation a Chromium feature instead of a Blink one. Is it supposed to be solved within the render engine?


Yes, typically the web engine will handle process isolation. While there are exceptions, the APIs for this typically look more like "open a website for me in this window" and not "please spin up my GPU process for this tab".


They have an answer to your last question in the FAQ

  When did you notify Apple?
  We disclosed our results to Apple on September 12, 2022 (408 days before public release).


408 days and it still requires the user to enable the fix via a debug menu? This is not a good security look from Apple.


I don’t know why they make it an Apple problem. They claim their attack works on all browsers, and it doesn’t look like it’s fixed on any of them.


The only reason the attack works on all browsers is that Apple forces all browsers on their platform to use WebKit. That's why it's an Apple problem.


That's only true for the iOS-related platforms. Nice old macOS still lets you run any browser you want, including one you've developed yourself if you so wish.


It works on WebKit. Firefox isn't affected on macOS, but it is on iOS because Apple forces the use of WebKit there.


It only affects Apple processors. From the FAQ:

Yes (with a very high chance), if you have a device running macOS or iOS with Apple's A-series or M-series CPUs. This includes all recent iPhones and iPads, as well as Apple's laptops and desktops from 2020 and onwards.


They added this FAQ point a couple minutes ago, that's why the existing comments didn't see it.


That wasn't on the page when I read it.

Good that they've added it. Disappointing to see that Apple has gone so long without releasing a fix.


This appears to be an architectural vulnerability where a speculative execution side channel similar to Spectre can be utilized within Safari or any other browser. The specifics of which environments are exploitable come down to the specifics of the JavaScript-based gadget they use to trigger/measure this side channel. The details may be in the linked paper, which I haven't read yet.
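
For orientation only, here is the shape of the classic Spectre v1 "bounds check bypass" gadget, sketched in JavaScript. This is not the paper's gadget (the paper describes a timerless attack, so its gadget and measurement differ), and all names here are made up:

  // Illustrative sketch of a Spectre v1-style gadget shape (not iLeakage's gadget).
  const data = new Uint8Array(16);          // memory the code is "allowed" to read
  const probe = new Uint8Array(256 * 4096); // array whose cache footprint is measured later

  function gadget(i) {
    // Architecturally, this branch prevents any out-of-bounds read.
    if (i < data.length) {
      const value = data[i];  // speculatively, a mistrained predictor may run this
                              // with i out of bounds, loading a byte it shouldn't
      probe[value * 4096];    // leaves a cache line whose index depends on that byte
    }
  }

An attacker would first call this repeatedly with in-bounds indices to train the branch predictor, then once with an out-of-bounds index, and afterwards infer the leaked byte from which probe line became cached.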


These questions are answered in the FAQ now:

> - Is this a Webkit vulnerability or a Safari vulnerability?

Safari.

"...iLeakage exploits idiosyncrasies in Safari's JavaScript engine..."

> - Does enabling Lockdown mode mitigate this vulnerability, seeing as mobile Safari doesn't expose these dev settings?

Yes.

"Lockdown Mode does mitigate our work by disabling just-in-time (JIT) compilation in Safari, which is a performance feature used by iLeakage to build its attack primitives."


That's good to know. Glad WebKit isn't inherently broken.


> Am I at risk if I use a credential manager?

> Not for the most part. In fact, we encourage using credential managers as opposed to trying to remember all of your passwords. In general, this is a better approach than reusing passwords or storing them insecurely. While iLeakage can recover credentials that are autofilled into a webpage, we note that many platforms require user interaction for autofill to occur.

Why would use of a credential manager change this? If it's leaking something out of memory, shouldn't it affect all memory within the Safari process space? I'm not familiar enough with this area to understand this caveat.


They're basically saying: a) you are safer with a password manager than without, in general; it doesn't provide any additional protection against this particular attack, but it doesn't make you any more vulnerable either. Having a password manager doesn't make you more likely to be caught by this attack (and the last thing they want to do is accidentally convince people to stop using one in favor of 'newpassword2').

b) you are safer yet if you turn off automatic autofill and instead use a hotkey or some other form of user interaction


One of their demos was showing how they could recover a username/password combination for a third-party site (Instagram), which was specifically possible because a password manager was in use that auto-filled those fields, putting them in memory. One possible reading of that is that you'd be immune if you didn't use a password manager, didn't let your browser remember the password, and just typed it in each time. There's a bunch of reasons that's a dumb objection:

- Having a password manager is good for lots of other reasons, and at least means that only one website is compromised if the password is stolen

- Their technique could probably also steal session tokens, which isn't quite as bad as stealing a password but is still bad.

- Password managers can be configured to require a click to fill in the password, which also defeats this attack.


I think they’re saying that you’re not more vulnerable with a password manager than you would be without one. I.e. they can recover passwords that have been autofilled into a page, just as if you entered the password manually, but they can’t read all your stored passwords directly out of your password manager.


That doesn't jibe with my read of their text.


FWIW I read it the same as the others.


My understanding is that the vulnerability only allows memory access to related Safari/WebKit processes (specifically, sites that were opened with a window.open call). So passwords stored in a separate password manager app are inaccessible unless that app autofills the password into the compromised Safari window/process.


>Why would use of a credential manager change this?

Change what?

>If it's leaking something out of memory, shouldn't it affect all memory within the Safari process space?

AFAIU, Safari generally puts different origins and extensions in different address spaces, so it's not vulnerable to speculative execution attacks. This attack found a way to make 2 different origins share the same address space. I'm assuming the attack doesn't apply to extensions. From the paper:

>We begin by abusing Safari’s site isolation policy, demonstrating a new technique that allows the attacker page to share the address space with arbitrary victim pages, simply by opening them using the JavaScript window.open API.


Please correct me if I'm misinterpreting this, but is the WebKit-only framing actually correct? The cache exploit seems to be a general one:

> Here, we show that our attacks have near perfect accuracy across Safari, Firefox, and Tor.

Moreover, is the attack via `window.open()` really specific to WebKit, or is WebKit just the only engine that was studied in depth here? Notably, `window.open()` implies a shared context between the calling window, which receives a reference to the newly created window, and the new window, which has a back reference via `window.opener`. Do other browser engines achieve perfect isolation?
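
For concreteness, a minimal sketch of the mutual references window.open() creates (the URL is just a placeholder, and Cross-Origin-Opener-Policy or the 'noopener' feature can sever the opener link):

  // In the opener page:
  const handle = window.open('https://example.com/');  // reference to the new window

  // In the newly opened page:
  //   window.opener  -> back-reference to the page that opened it
  //                     (null if COOP / noopener severed the relationship)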


Chromium and Firefox have implemented site isolation on their desktop browsers, so pages that are not same site should never be loaded in the same process. On mobile browsers, Chromium's site isolation is limited, and Firefox has not finished implementing it.

https://www.chromium.org/Home/chromium-security/site-isolati...


Notably, the paper suggests that WebKit is even more hardened:

> Taking this approach a step further, Safari follows a simple one process per tab model, where two webpages are never consolidated into the same rendering process, even under high memory pressure and even if they share an eTLD+1 in their URLs. Instead, Safari spawns a new rendering process for each tab until the system runs out of memory.

It's only in the context of `window.open()` that this isolation strategy is defeated. Given that both the calling window and the newly opened window share mutual references to each other, isn't the next side channel attack lurking around the corner, even if these windows are rendered by separate processes?


The paper does imply that, but I would disagree that it is more hardened. I would guess that this strict process per tab model is Safari's attempt to get some degree of isolation despite not having true site isolation.

> Given that both the calling window and the newly opened window share mutual references to each other, isn't the next side channel attack lurking around the corner, even if these windows are rendered by separate processes?

Non-same-origin opener references only allow very restricted operations. It is possible that there are undiscovered issues, but it is a lot less powerful than running in the same process. It isn't like having a raw pointer from one window to another.
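
As a rough illustration of how restricted that cross-origin handle is (placeholder URL; a sketch to run in a browser console, assuming the opened page is cross-origin and has finished loading, since immediately after open() the window is still a same-origin about:blank):

  const w = window.open('https://example.com/');

  setTimeout(() => {
    console.log(w.closed);                       // allowed: one of the few readable properties
    w.postMessage('hi', 'https://example.com');  // allowed: explicit, structured messaging
    try {
      w.document;                                // blocked: cross-origin access throws
    } catch (e) {
      console.log('blocked:', e.name);           // typically "SecurityError"
    }
    w.close();                                   // allowed for a window this page opened
  }, 2000);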


Hum, given that the Apple Silicon processors use unified memory and a general cache (and that some of their performance has been attributed to what was then deemed an innovative memory and cache architecture), and that this is in principle a cache exploit, there might still be issues…


The caches on Apple's processors are fairly standard (but implemented well).


Maybe, maybe not. Spectre-type attacks (of which this is one) rely on a shared address space (in other words: the same process). Meltdown can cross address spaces but the authors do not know of any such attacks on modern Apple processors.


I'm confused. This seems like a high severity issue, but the fix is behind a debug menu? Why has this been made public before a fix has been properly rolled out everywhere?

This also goes to show how the side channel mitigations are totally useless and we should stop pretending such attacks have been fixed. It is not safe to run untrusted code, no matter how you sandbox it. Not on a host using a modern CPU running multiple applications.


> Why has this been made public before a fix has been properly rolled out everywhere?

It looks like Apple has had over 400 days to fix this. That's already more time than I'd have given them. It's far better that Apple users are told what their risks are than to leave them ignorant of the danger for another year+ by saying nothing while praying that nobody else is already aware of the flaw and quietly exploiting it.

I'm all for giving companies time to fix their bugs, but at a certain point it becomes more irresponsible to not inform the people impacted.


I'm as confused about the mitigation status as others. However, the paper is clearer than the website:

> That is, while the speculative JavaScript sandbox escape is still possible, an attacker becomes limited to reading their own address space and therefore their own data. Finally, at the time of writing, Apple’s patch is publicly available [57] and is implemented in Safari Technology Preview versions 173 and newer [58].

[57] is https://github.com/WebKit/WebKit/pull/10169. At the bottom of it, an engineer continues to link ongoing hardening patches for window.open() + process isolation.


For standard Safari (not Technology Preview) 17.0 on Ventura, "Swap Processes on Cross-Site Window Open" is already enabled (and labeled Stable) for me. I did not enable it, so it's possible Apple already enabled this and their description of how to enable it, etc. is out of date.

Edit: The below comment is correct, I was looking at a similarly named item "Swap Processes on Cross-Site Navigation" - it appears this is not enabled by default right now.


I also checked on my iPhone which I just upgraded to 17.1 and the flag is present and already enabled there as well.

Edit: Seems like I got confused by the other very similar feature flag as well.


Also enabled by default for me on macOS 14.1.

Edit: It appears I confused it with a similarly named flag. It doesn't appear to be enabled on macOS yet by default.


They disclosed this vulnerability today but it appears the information on the vuln scare site is outdated. Bit sloppy.


I think users here, including myself, got confused because there is a feature flag that has a similar name.

"Swap Processes on Cross-Site Navigation" is enabled by default, but this paper recommends "Swap Processes on Cross-Site Window Open".


> Why has this been made public before a fix has been properly rolled out everywhere?

Because the last bastion of privacy and security has sat on this for 400+ days without fixing it. I wouldn't blame the finders for making it public before the fix is rolled out; rather, blame Apple for sitting on their hands for so long.


> This seems like a high severity issue, but the fix is behind a debug menu?

Presumably because fixing this required changing the process model in WebKit, which means they are still testing the change before landing it for everyone.

> This also goes to show how the side channel mitigations are totally useless and we should stop pretending such attacks have been fixed. It is not safe to run untrusted code, no matter how you sandbox it.

Not all side-channel attacks have been fixed, but I think it is inappropriate to call side-channel mitigations useless. They significantly raise the bar for exploitation, much like seatbelts aren't "useless" even though people still die in car crashes. And I don't know about you, but I still drive cars even though I'd love to find ways to improve their safety.


It's unclear if they even reported this issue to Apple.


They reported it to Apple a long time ago, over a year I think. The problem appears to be that it's very hard to mitigate.

ETA: To be clear, this isn't me speculating. I spoke with one of the authors and asked this specific question. Apple hadn't shared the mitigation with them as of a few days ago.


Thank you. Do you know if lockdown mode / disabling JIT would mitigate the issue?


From what I'm told, disabling JIT might do the job, with the consequence that some things break.


Disabling JIT would make it substantially harder to exploit.


From the site (perhaps it was updated to include this?):

> We disclosed our results to Apple on September 12, 2022 (408 days before public release).


Interestingly, the YouTube videos are 13 months old.


That's when they reported it to Apple.


It's been reported

>At the time of public release, Apple has implemented a mitigation for iLeakage in Safari

However the site gives no details on timelines or a report at all.

(edit) They have added a "When did you notify Apple?" to the FAQ.


Look at the FAQ


See edit.


> We disclosed our results to Apple on September 12, 2022 (408 days before public release).

Really interested to find out why Apple has (mostly) slept on this for over a year!


Because they can.

They are one of the CVE gods, so they can veto issuance of CVEs against their products. That kind of power means you can move as slowly as you please.


BS. Being a CVE numbering authority (of which there are several hundred) does not grant a veto against CVE issuance. They are allowed to issue CVEs on their products but by no means are they the only authority that may issue them for Apple vulnerabilities.


Also, you don't need to issue a CVE to publish a vulnerability. You just make it public regardless and say a CVE was denied for it.


I thought I had seen a mention of a fix on the ileakage website and then it disappeared. I almost thought I imagined the whole thing, but actually they have been making changes to the website within just the past hour.

> "To mitigate our work, Apple has just released iOS 17.1, iPadOS 17.1, and macOS Sonoma 14.1. Update your devices now!"

Which they have now reverted.

https://github.com/ileakage-authors/ileakage-authors.github....


I think one of the authors of the paper is reading this thread, because when I pointed out that there was no timeline on the site, they added a brief section to the FAQ. There is also some confusion here about whether a fix has been rolled out, so I think they've become confused as well.


Strange. I’ve updated to iOS 17.1 and under Settings -> Safari -> Advanced -> Feature Flags, the `Swap Processes on Cross-Site Navigation` is there and is enabled by default. I wonder if it’s different from `Swap Processes on Cross-Site Window Open` on macOS.


It is different. The cross-site navigation flag is a couple of years old. It was enabled by default for iOS in November 2018 for example https://github.com/WebKit/WebKit/commit/e191fc8c412850cb9fd0...


Also on by default on 17.0.3. I definitely did not change this.


"We note that iLeakage is a significantly difficult attack to orchestrate end-to-end, and requires advanced knowledge of browser-based side-channel attacks and Safari's implementation" - possibly the reason Apple is not losing sleep over this?


So difficult to orchestrate that it's infeasible, so there's effectively 0% risk involved.

Yet another scary nickname and domain name for a vulnerability reads as unprofessional and untrustworthy to me.


It's been shown possible, so: 1. There are a number of well-funded agencies which would be happy to abuse it. They have both the skills and time to do it. 2. Showing that Spectre is possible on Intel created a steady stream of similar attacks. Other groups are likely already looking into it and may come up with much more feasible options.


The security industry runs on fear, so that's not so surprising.

IMHO if you're being targeted by a nation-state actor, or otherwise someone who knows enough about your hardware and software environment to be able to do something like this, there are far more important things to worry about.

All these side-channels require a lot of setup and can be easily perturbed by other unpredictable sources of noise in the environment.


Eh, if an average security programmer redneck like myself can abuse Spectre and Meltdown, I am pretty sure some of the malware/ransomware crews can; they are not state-sponsored, but they do have a financial incentive to do so.

An attack vector is available; they'll abuse it if they can. Do you think I'm overestimating them?


The sophistication of the authors in their paper speaks to the opposite. Give it a read!


> if you have a device running macOS or iOS with Apple's A-series or M-series CPUs. This includes all recent iPhones and iPads, as well as Apple's laptops and desktops from 2020 and onwards.

as a rare Intel Mac owner, I guess I am not affected then


You're probably affected; the cache hierarchy would just be a bit different and require some changes to the proof of concept.


If you're on macOS, you can simply not use Safari. If you're on iOS, you have to use Lockdown Mode, which is the only safe way to use an iPhone. Any benchmarks done without Lockdown Mode should be considered as useful as CPU benchmarks run with mitigations=off.


That's absolutely not fair. Lockdown mode isn't the default and should only provide defense in depth. Devices running the default configuration are absolutely expected to be secure.


> Devices running the default configuration are absolutely expected to be secure.

That might be the customer's expectation, but that isn't what Apple is providing. We've time and again seen that the default configuration is not secure. Apple has known about this bug for more than a year now, and the only protection remains to use lockdown mode.


Nothing in the world is 100% secure; there are tradeoffs. Lockdown mode is more secure (but certainly not 100% secure) at the expense of less usable and less performant.

Demanding absolute security on the default configuration is unreasonable. No platform provides that.


> Demanding absolute security on the default configuration is unreasonable. No platform provides that.

That's not what we're expecting. Most consumer platforms provide fixes for known secret-stealing vulnerabilities on the order of weeks. What we're seeing here is an outlier.


Yeah, no, this is absolutely not true. Every competitor has similar issues in deploying fixes: better in some places, worse in others.


More than a year to deploy a fix for a secret leaking vulnerability in a supported product? Do you have any examples?


Apple's initial fixes for Spectre/Meltdown were for Safari only. They did not fix the rest of the OS for some time.


Can you link me to where Lockdown Mode would prevent this kind of attack?


Another good reason to allow other browser engines on iOS devices.


EU regulations may eventually force Apple.


If you're getting an error when trying to run:

  defaults write com.apple.SafariTechnologyPreview IncludeInternalDebugMenu 1

Make sure your terminal has Full Disk Access and try again.


It must be time-consuming to read another process's memory through such a side channel. Then limiting JS execution time for web pages should mitigate this vulnerability?

By default, only a small amount of JS execution would be allowed for web pages (small event handlers and such). If a page tries to execute more JS, the browser should ask the user's permission to extend the limit. (Maybe several levels of the limit should be supported?) Some web pages could be added to a permanent list of trusted domains with a permanently increased limit.

Upd: 4-5 minutes, in the first video (https://youtu.be/Z2RtpN77H8o?si=XB4oI9ner8pFTIqN) - see the time on the top right of their screen. When the attack starts it's 5:22, and it ends at 5:27.
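
There is no browser setting like this today, so purely as an illustration of the kind of accounting such a limit implies (using only the real performance.now(); the budget value and function names are made up):

  const BUDGET_MS = 60_000;   // hypothetical per-page JS compute budget
  let usedMs = 0;

  function runChunk(work) {
    const start = performance.now();
    work();                                  // one slice of the page's JS work
    usedMs += performance.now() - start;
    if (usedMs > BUDGET_MS) {
      // In the proposal above, the *browser* would pause here and ask the user
      // whether to extend the limit; a page can only self-police like this.
      throw new Error('compute budget exceeded');
    }
  }

In the proposed scheme the browser itself would do this bookkeeping per origin and surface a permission prompt, rather than trusting pages to call anything.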


Ignoring people just clicking 'yes' on popup boxes... if JS execution were limited to 60 seconds and the attack takes ~5 minutes, an attacker confined to 59 seconds would still have roughly a 1/5 success rate.

It doesn't close the vuln and just adds unnecessary friction for users.


Not if the exploit requires a minimum of 5 minutes.


This will likely just fatigue users, who will just keep tapping "yes" on the prompts to make the websites work.


Most web sites will not be affected. Also, a well-designed notification dialog, with a clear explanation and useful options - allow once, forbid, allow and add the website to trusted domains for N days, etc. - will help users.


The point is that most sites don't need that much intense JS processing.

Also, a "no, and don't ask again" button would be very useful in this case.


The end-user would have no idea whether that much JS processing is legitimately required or not. "No, and don't ask again" would cause no end of support issues.


I know one end-user who would very much want a browser setting for a default compute limit for domains/pages, after which the browser would start asking for confirmation, as described in the comments above. Web pages could also ask for such permission upfront, using e.g. a meta tag.

Such a setting would also reduce the possibilities for malicious crypto mining.


>Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.

Can't wait for passkeys to replace passwords everywhere.


From a cursory review of the FAQs on the page, it appears one mitigation might be to keep only one browser tab open at a time? They appear to be using timers and a cache-eviction gadget to infer the state of other browser tabs/processes, so it's unclear what they can recover if you don't concurrently have a session open to a particular site outside the gadget's execution context. ???


They use window.open in a mouseover event listener to open another page. Even if you close it, they are still able to read from it, as that memory isn't immediately zeroed or returned to the OS.
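
A minimal sketch of just the setup described above (the listener plus window.open; the URL is a placeholder and none of the actual cache-probing from the paper is shown):

  // Attacker-controlled page: open the target when the user hovers, as described above.
  const TARGET = 'https://mail.example.com/';   // placeholder victim URL

  document.addEventListener('mouseover', () => {
    const victim = window.open(TARGET);         // per the paper, the victim page now shares
                                                // this renderer's address space in Safari
    // ... the paper's speculative read / cache-probing code would go here ...
    // closing `victim` later does not immediately scrub its memory from the process.
  }, { once: true });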


Besides window.open, I'd wonder if iframes could also be vulnerable if they launch in the same process.

Chrome and Firefox both support Out-Of-Process Iframes as part of their security setup; though I'm not sure if Firefox has it enabled by default yet. Firefox even drew some lovely pictures about it here: https://hacks.mozilla.org/2021/05/introducing-firefox-new-si...


> However, this mitigation is not enabled by default, and enabling it is possible only on macOS.

Is this not covered by lockdown mode on iOS? Crazy.


The paper mentions JIT multiple times, and Lockdown Mode disables JIT. I hope someone else can confirm it, but it appears the attack would be mitigated in this case.


It seems like JIT would make the attack easier, as faster code gets more accurate timings, but disabling it doesn't seem to really stop the attack.


It's not just faster code, but also having more control over it. The JIT engine is more than happy to put a hundred data-dependent instructions right after each other; an interpreter makes this harder.


I don't know how best to point out an improvement to the site, so: if you run

> defaults write com.apple.Safari IncludeInternalDebugMenu 1

and you get

> Could not write domain /Users/YourUser/Library/Containers/com.apple.Safari/Data/Library/Preferences/com.apple.Safari; exiting

you can instead check Safari's "Develop" menu, under the "Feature Flags" section.


At least for me, this was caused by an OS security permission: enabling Full Disk Access for the terminal emulator app in Security & Privacy (System Preferences) fixed it, and the setting can obviously be reverted afterwards.


The website says that you can enable the "Swap Processes on Cross-Site Navigation" flag only on macOS; actually, on iOS you can access this flag via Settings -> Safari -> Advanced -> Feature Flags. I think this is the iOS equivalent of the macOS mitigation that the authors are suggesting.


That's a different flag. The website says you should enable "Swap Processes on Cross-Site Window Open"


It also seems to be on by default on iOS 17.1. It doesn't seem to be on by default in macOS Sonoma (14.1).


Is this mitigated by the 25 Oct updates? https://support.apple.com/en-au/HT201222


Please Apple, stop supporting Safari; even Edge is a better browser.


Another reason I use uBlock Origin to block Javascript by default, and why I don't use a password manager that autofills without user intervention.


I just want to say how comically bizarre the whole OS/Browser dichotomy/duplication has become.


window.open() strikes again


Does this mean you would have to visit a malicious website, that malicious website would open a different website with window.open(), and from this they can read data through this side channel attack?


Yes. But it's incredibly hard to pull off.

My understanding is that the theory of this attack was introduced with Spectre. iLeakage adds enough research to create a working proof-of-concept, plus acknowledgment that, as of last year, it wasn't addressed in Safari.

More on site isolation (the functionality that mitigates these attacks):

https://w3c.github.io/webappsec-post-spectre-webdev/ https://www.chromium.org/Home/chromium-security/site-isolati... https://blog.chromium.org/2021/03/mitigating-side-channel-at...


I think it's not that hard to pull this off, as you can disguise it with a "social login" feature. E.g. imagine a website promoting raffles/free prizes if you log in with fb/insta/etc; there are way too many gullible users who'd use these pages.

I'd not be surprised if accounts used in troll farms /bots/ are stolen this way.


There have long been working Spectre attacks. From skimming the paper a bit, I think the contribution of this work is that they have come up with an attack that works on Apple's processors, as well as bypasses for a number of mitigations WebKit has (that fall short of site isolation).


That seems to be the implication of this attack. The mitigations referenced by the web page are being rolled out in newer versions of Safari and iOS, according to other comments in these threads.

The mitigation seems to be to split the process space.

From the paper:

     Attacking Gmail. With Google being one of the world’s largest email providers, it is highly likely for a target to be signed in with their personal account. By having the event listener inside the attacker’s page execute window.open(gmail.com), we can consolidate the target’s inbox view into the attacker’s address space. We then leak the contents of the target’s inbox, see Figure 11.
     Recovering Android Text Messages. Android users can send and receive text messages from a browser window by pairing their phone with Google’s Messages platform. Thus, by opening Google Messages using window.open(), we can recover a target’s text messages without attacking their mobile phone itself.


What is the point of the dedicated vulnerability marketing websites? Like, for real, why do people buy a domain, configure DNS, design a full webpage, and set up a server somewhere?

Is there some secret world I don't know about that's driven by how banger your vulnerability disclosure presentation is? Every one of them has a full site now. Is this what it takes to get attention these days? Everything, including computer bugs, needs a marketing campaign? Every time I see these sites I roll my fucking eyes at how ridiculous it is that people keep making them, but it seems to only be increasing in occurrence.

Can someone explain this to me, because I feel like I'm missing something. It just feels like peak consumerism and attention-economy BS that shouldn't be needed IMO, but I hope I'm missing some crucial thing that makes these valid.


It's because when a news website wants to talk about the vulnerability they get a webpage to link to that is canonical and has all the information someone would want to get an accurate description of the issue.


Isn't that exactly what https://www.cve.org/ would be for?


No, that's the place where you get an identifier for the issue. You still need a place with answers to frequently asked questions, more details as to how the attack works, artifacts and proof-of-concept code, etc.


CVEs usually link to the SCM issue/pull request where the conversation and reproduction take place. We've been finding and patching vulnerabilities for decades without needing dedicated websites to inform folks, so that reason for these sites needing to exist doesn't make any sense.


On CVE I see a fact-oriented, bug-tracker-style database of CVE issues with a schmorgus board link/reference barf on each CVE page, but on the OP site I see a really well-presented (with videos, FAQ, paper) description of the issue? It does feel like self-marketing, yes, but it's entirely deserved if they found the issue?

I'm sure keen folk can digest SCM pull requests, but I think that population is a small minority compared to those reached by well-presented content disseminated on YouTube, sites, blogs, etc.

I don't think mandating CVE as the only place where vulnerability conversations happen would be optimal, no?


> schmorgus board

I know HN frowns on grammar-policing comments, and rightly so; but I thought nonetheless you might like to know (and it looks so much more formal this way anyway!) that it's "smörgåsbord" (the diacritics are commonly omitted in English).


Pronounced [ˈsmœ̂rɡɔsˌbuːɖ]


Yeah, that's not how Apple's CVEs work.


Bahahaha no, the vast majority of CVEs are at best a vague description, a severity score, and no context.


Would you send your CFO there, though? CTO, sure, of course, but there's a whole world of non-technical people who need to consume this kind of information nowadays.


I wouldn't send my CFO anywhere, because it has jack shit to do with him. And if it ever did, my CTO would be fine looking at a CVE website and would have zero issue distilling the problem down when HE talks to the CFO, since talking to people like that is part of the CTO's job.


That’s certainly your prerogative!


Yes


The amount of time and effort they spent setting up the website – since this is 2023 and websites are super easy to set up – is probably dwarfed by the amount of time they spent on the vulnerability itself.


The time and effort isn't the concern. The precedent these sites are setting, that a vulnerability isn't severe unless it has a marketing campaign, is the concern, and based on some comments here, it's already taking hold.


They waited over a year before disclosing the vulnerability. Since Apple didn't fix it in that time, they're now relying on negative publicity to pressure Apple into fixing the issue.

And the problem here is that Apple and other companies don't address these vulnerabilities until someone forces their hand. That's why a "marketing campaign" is required, unfortunately.


This site regularly upvotes posts on X (nee Twitter) and Medium and other sites that actively nag me. It would take me less than 6 minutes to register a domain on njalla, pay in crypto, add the DNS to cloudflare, and upload some static files to a GitHub repo. And no nagware! Even if it is self promotion, I couldn't be bothered to whine about this when I'm inundated with far more bullshit from far more prominent, frequent actors on a daily basis.


I don't get bothered by people doing it because they are able to, or care how long it takes them to spin it up; good for them. The thing that bothers me is that disclosing sec vulnerabilities isn't a popularity contest, and as you can see from other comments here, people are gauging the severity of the vulnerability based on whether it has a fucking marketing website or not. That's not a good pattern to keep enforcing imo.


If people are actually judging the importance of a vuln based on "it has a domain" then my issue would be with them.

People have all sorts of miscalibrated heuristics with which they judge things. Regardless of the originators intentions.


Treat the problem, not the symptoms. People can't use marketing sites to gauge severity if those sites don't exist.


Why is your question not directed at the people that bother you, if that is the case?

Seems rather unproductive to question why the site exists when your problem is with people using the existence of the site to make decisions.


My problem is with the sites existing, which is what my original comment is asking about. People using them to gauge severity is a symptom of these sites being a thing, and is naturally going to happen in a world like this. If the marketing websites don't exist, people won't be able to use them to gauge severity. Treat the problem, not the symptoms.


If I had a website that posted a sci-fi story and people misinterpret the story as being real and factual, should I take my website down or should people refresh themselves on how to differentiate sci-fi from reality?

I would argue that the people who think the existence of a website is a good gauge for vulnerability severity need to have a refresher on how to gauge vulnerability severity. The inability/lack of education in how to assess severity is the root of the problem in my opinion.


I have no idea how to go about responding to your hypothetical question; people reading fiction and thinking it's real doesn't compare to people being conditioned not to care about sec vulns unless they have a dedicated website with pretty logos.

War of the Worlds was broadcast and people freaked out that aliens were invading. I don't think they tracked each of those people down and told them not to be so stupid; they started adding disclaimers to the broadcasts so people coming in late would know it was a story, fiction. They fixed the problem, not the symptoms.


Agree to disagree, I guess.


Everything seems to be a popularity contest now. I’ve seen people with the job title “Cyber Influencer”.

It could just be that the researcher is immensely proud of their work and wants to show it off with its own website, logo, clever name, etc.


They should be proud, and I'm not bothered by that one bit; the work they did is a great thing. Whether this website exists or not should have no bearing on the good work they did or on them being proud of it.


> domain on njalla, pay in crypto, add the DNS to cloudflare

one of these things is not like the other


This is hosted on GitHub Pages, so it takes very minimal resources to set up and keep running. The domain is also likely $10, assuming they didn't need to pay a squatter for it.

I think it's just a trend for any huge-scale vulnerability research team to put together a website for it, as that amount of effort will indicate a certain level of attention the exploit requests of the reader / the security community at large.

And it doesn't always happen. Log4shell, for example, did not get its own website: https://news.ycombinator.com/item?id=29504755


> as that amount of effort will indicate a certain level of attention the exploit requests of the reader / the security community at large.

Feels like a dangerous way to gauge the severity of issues. What if the discloser doesn't have the funds or the skills to setup a dedicated website? Will it not get any attention since there's no yodawgiheardyoulike0days.com website to float up to the top of HN? This is what CVE severity scoring is for and what should be used, not the presence of a dedicated website, no?


CVE scoring is about as worthless as whether a vulnerability has a website.


Maybe, but it's at least an agreed-upon system with a centralized database and format, which can be improved since an org is behind it with the goal of making sec vuln disclosure the best it can be. The wild west of marketing websites isn't advancing towards any sort of shared understanding.


I am 100% on board with free and easy assignment of CVEs and a central database of them. I just don't think they are a good place for keeping vulnerability details, because it is too rigid. Having a link to relevant details is good enough, and if the link happens to just include everything in it then I'm fine with that.


"All crisis is profit." :)

Clearly we need Vulnerability Website as-a-Service (VWaaS).

         .
(joking or not? even i don't know! What I know is, there seems to be a fine line between cynicism and prophecy...)


You joke but I bet it's already in the YC fall 23 batch


> What if the discloser doesn't have the funds or the skills to setup a dedicated website?

Evidently they just get ignored, if their account of direct disclosure to Apple is anything to go off of.


It's a paper. Plenty of papers come with their own website. The purpose of a paper is to make its findings publicly accessible, so putting in the extra work for a website to accompany it is understandable.


maybe they want to maximize their career opportunities? It's gotten better over the years, but for a long time security researchers and white-hats toiled away and never even got a "thanks", sometimes they got legal threats. If I had the patience to work in this industry I'd want to maximize my returns.


> Like, for real, why do people buy a domain, configure dns, design a full webpage, setup some server somewhere?

The same can be said for personal blogs. The purpose is showing off. By the way, setting up a server nowadays is ridiculously easy and cheap. If you have done it a few times and you use a simple static site stack with CI, then you can have a site live in 15-30 minutes, including SSL, with hosting for free (GitHub Pages, Cloudflare, or GitLab Pages, for example).


Personal blogs, or any other website, have nothing to do with creating branding and dedicated websites to disclose security vulnerabilities. Computer bugs have logos.


Have developers really come full circle to "why do we make websites?"


Yes, that's what my comment was asking, why do we make websites, you nailed it, and I thank you for the concise way of asking what I took so many more words to ask above.


Yes, we dream of an alternate timeline with CVE promos on TCP/IP HyperCard https://www.wired.com/2002/08/hypercard-what-could-have-been...


I think it's plausible that hosting it yourself ensures that companies can't exert pressure to have it removed.


It's a webpage promoting a vulnerability they found and a paper they wrote, and it is owned by the Georgia Institute of Technology. They ~~have servers~~ are just using GitHub Pages, and I'm sure they already have a budget for marketing. This is marketing.


Nope you’ve got it in one


> However, iOS has a different situation. Due to Apple's App Store and sandboxing policies, other browser apps are forced to use Safari's JavaScript engine. That is, Chrome, Firefox and Edge on iOS are simply wrappers on top of Safari that provide auxiliary features such as synchronizing bookmarks and settings. Consequently, nearly every browser application listed on the App Store is vulnerable to iLeakage.

This should be a reason to lift this policy and allow different engines on these devices!


Because increasing the attack surface would somehow increase the security?


You have a laptop with a browser. You buy a laptop with a more secure browser. You have increased attack surface yet security is improved.

It is quite possible a native Chrome on iOS would be more secure.


Absolutely not. You now have a computer where you have Chromium's flaws for your daily internet browsing and Safari's (or whatever native browser is on the OS) flaws for the native apps that use the native browser.

Yes indeed, you’re still free not to use these apps. But would you? At some point why not get a computer where the internet is the “OS” (a chromebook for instance……… where guess what? you cannot use an alternate rendering engine. Interesting, no?)


Technically you can run an alternative browser on ChromeOS in the form of an Android app, or by running a different one in the Linux sandbox.


In exchange for decreasing the number of affected users and applications? Absolutely. No one would be forced to use a non-Safari browser.

A software monoculture means that a bug for one is a bug for all.


But you'd also get apps that decide to use Chromium for whatever reason outside of the user's control, thus making those users vulnerable to Chromium flaws…

In short, you increase the possibilities, you increase the attack vectors. There is no way around it AFAIK.


>outside of the user’s control

Using the app at all is in the user's control. The current state of iOS is that they don't have any control whatsoever.


Here's an example: I know I have the control of not using youtube because I really dislike gougle. Would any of my family members? Absolutely not. Would they use their browser if they could in the youtube app? Most definitely yes.

So no, it is most definitely not in the user’s control.


> Would they use their browser if they could in the youtube app? Most definitely yes.

> So no, it is most definitely not in the user’s control.

You're comparing impulse control to hard runtime limitations. It doesn't really track; I understand your apprehension, but if none of your family members notice or care then maybe Google's hypothetical solution here worked? If that's an undesirable outcome for you, I think you should be lobbying for better alternatives instead of using it as a boogeyman to excuse iron-grip ecosystems. Two wrongs aren't going to make a right here.


> Using the app at all is in the user's control.

Not if it is mandated by work, home, family, government, etc.


If it's mandated then they never had the control there to begin with.


> A software monoculture means that a bug for one is a bug for all.

That's true, but the situation is not improved by a Chromium/Blink monoculture. It's the same problem with a slightly different flavor.

So yes, iOS should be opened to third-party engines, but at the same time steps should be taken to stymie Chromium's dominance.


Brave on iOS can disable JavaScript on all web pages except those you explicitly opt in to trust.


They should allow different engines, but this isn't a reason. Different browsers have different vulnerabilities, but aren't substantially more secure as far as I'm aware.


But it's for your security! Not joking: https://news.ycombinator.com/item?id=21587191


unironically, having three browser engines is three times the attack surface, what's the problem with that claim?

Uarch "multiculture" hasn't saved us from architectural attacks; actually it probably increases the total number of vulnerabilities, and browser multiculture won't magically make them all perfectly secure and perfectly implemented either. If each browser is only 99% secure, you now have 0.99^3 ≈ 0.97 total security, i.e. you have roughly tripled your odds of a vulnerability existing in at least one of your apps at a given time (about 3% instead of 1%).

There are other arguments in favor of sideloading, but I don't really see how multiple browsers is a security improvement; actually it seems unironically much worse on that front, since now you are depending on three teams of engineers (two of which are not even at your company) to execute perfectly and never have a vulnerability, in what is one of the highest-privilege applications (essentially the canonical "full control" app). People want their browser to have access to location info (thus Bluetooth/WiFi settings), camera, camera roll (thus long-term location history), microphone, everything. The fewer applications like that exist, the better off you are.

I can't fathom anyone saying that they should, for example, run three different high-privilege pieces of software in their production systems when one would do fine - f.ex. you wouldn't run nginx, Apache, and Keycloak all mixed into your environments. That would obviously inflate the risk of being subject to at least one attack. Why is the browser different?


Because you are not running all of them at the same time, you are only running one of them. The one you choose to run can be better than the current one you are forced to use and thus your attack surface has decreased because you are not using the worse ones.

Having options does not reduce your security, except insofar as exposing the underlying mechanism that allows choice increases your attack surface, and even then that does not inherently reduce your security. A mechanism that allows multiple implementations may require more exposed attack surface, but if it is used by a high-quality application to provide a highly secure implementation, it is still better than a reduced mechanism designed to allow only a single application when that application provides a low-quality implementation.

Also, the argument you just proposed could just as easily be used to argue that we should disallow any operating system other than Windows 3.1, since having more operating systems just increases the attack surface. That is patently absurd for the reasons I just stated above, which is why your argument is fatally flawed.


> Because you are not running all of them at the same time, you are only running one of them

This is not true. The moment Apple allows different browser engines, my Gmail app would use Blink. As a browser, I'd maybe use Firefox/Gecko, and all Apple apps would still use the embedded WebKit.

Yes, this is my choice and I would do it knowing I'm increasing my attack surface, but Apple's reasoning is not false…


A given application is still not using more than one browser engine. If there is a vulnerability in Webkit and all apps have to use Webkit, all apps are vulnerable. If only a third of apps use Webkit, only a third of apps are vulnerable. A different third of apps might be vulnerable if there is a vulnerability in Blink. When the security record of each browser engine is comparable, this isn't a net increase in exposure, it just averages out to the same. When the others have a better record -- and Google and Mozilla have both introduced a number of novel security and privacy features -- then the net exposure goes down.

Meanwhile having the choice is a security advantage because a) the user could choose the one with the best security record, whether or not it's Apple's and b) if there is an active vulnerability in Safari today then the user can use Chrome or Firefox today, and then do the reverse on the day there is an active vulnerability in Chrome.

The main concern people seem to have with this is the one which is also caused by Apple -- apps might embed a browser engine and then if it's vulnerable you have to update lots of apps. But this is only because of their lacking support for independent libraries. If the Firefox browser engine was provided as an iOS library by Mozilla then Mozilla would update the library and every app that uses it would get the update at once. That problem is only caused by this not being supported.

And is a problem that extends to more than browser engines. Apps can't use their own browser engines, but they might incorporate some common third party code that doesn't require JIT compilation, and then if someone finds a vulnerability in that code you still have to update a zillion apps. Specifically because the code isn't distributed as a dynamic library by its developers and instead gets copied into each app independently -- which not only impairs security but takes up more storage and memory to have multiple copies of the same code.


> A given application is still not using more than one browser engine.

That doesn't seem true, I can easily imagine an app that's based on Firefox but can still cause a WebKit page to open, you just need a system API that uses WebKit.

> If the Firefox browser engine was provided as an iOS library by Mozilla then Mozilla would update the library and every app that uses it would get the update at once.

That's not how the app update lifecycle works, they're all independent. (Otherwise they'd break a lot more easily.)


> That doesn't seem true, I can easily imagine an app that's based on Firefox but can still cause a WebKit page to open, you just need a system API that uses WebKit.

It's still not using two different browser engines for the same purpose. This is no different than having two different apps that each use a different browser engine. The attacker needs the app to be using the exploitable browser engine in the context where they can deliver an attack payload, not in some other context.

You can also improve this situation for system APIs by making the API open the page using the user's default browser instead of one hard-coded by the system or the app.

> That's not how the app update lifecycle works, they're all independent. (Otherwise they'd break a lot more easily.)

Breakage isn't common when systems implement this properly. When you get the new version of libssl from apt, all the packages that depend on it get updated and it's unusual for any of them to break.


I’ve worked on Android apps that embed a browser engine but also use native web views. I doubt it’s rare. They’d exist on iOS too, if it were possible.


> If the Firefox browser engine was provided as an iOS library by Mozilla then Mozilla would update the library and every app that uses it would get the update at once. That problem is only caused by this not being supported.

We don't want to go back to DLL hell, do we? History has shown that this approach does not scale, and definitely not on mobile.


The ancient Windows implementation of this was flawed because it doesn't use versioning. Newer systems do it sensibly: If two versions of a library are incompatible with each other then they can be installed in parallel.

Meanwhile you still don't need a thousand versions installed because the library only has to declare a version compatibility change if some part of the API is removed. Otherwise a newer version of the library will implement everything an older version does and only make additions or bug fixes.

Then you have some app which needs library version 2.3.4 or higher, some other app needs the same library with version 2.3.5 or higher, so the system sees those dependencies and installs version 2.3.9 which is backwards compatible and can be used by both of those apps. An old app needs version 1.2.0 or higher, which isn't compatible with 2.3.9, so the system installs version 1.2.15, which is. Then you have two versions of the library installed alongside each other instead of the three versions you have now -- and the two versions you have installed are both still maintained, instead of having apps that quietly statically include versions 1.2.0 and 2.3.4 which each have a big fat CVE, and 2.3.5 which patched the security vulnerability but still has a couple of inconvenient bugs you don't need.

This is all done by a package manager which uses the list of dependencies for each app to take care of it for you when you install an app. This has been a solved problem for decades. But mobile fails to implement the known solution and instead sticks you with inefficient manual updating of every individual app.


> The one you choose to run can be better than the current one you are forced to use and thus your attack surface has decreased because you are not using the worse ones.

You seem to think that WebKit is the worse one here, and that having, for example, a Blink-based or Gecko-based browser would mean these kinds of bugs don't happen.

That is unfortunately just wishful thinking. Just read through the release notes of Chrome and Firefox and you will see that they fix security bugs in every release.

All browser engines are of extremely high quality, but they all keep having regular bugs and security bugs.


Maybe three times the browser engines, three times the chance of having a safe engine in the end?


As far as I understand, the attack surface will be reduced in the end. Here is why: the amount of content being processed is the same whether a device uses one browser engine or several. So if we assume a given engine is safe 99% of the time, the chance of not hitting a vulnerable page stays at 99% either way. However, by segregating browser data between engines, the exposure of confidential information in the event of a breach is reduced.


> unironically, having three browser engines is three times the attack surface, what's the problem with that claim?

If only one third of the users run a vulnerable browser, the other 2/3 would be safe. Security through compartmentalization.
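Back-of-the-envelope, with made-up numbers (equal market share, and a bug that lives in exactly one engine's codebase):

  // Hypothetical: three engines with equal user share, and any given critical
  // bug affects exactly one engine.
  const engines = ["WebKit", "Blink", "Gecko"];
  const share = 1 / engines.length;

  // Mandated single engine: every engine-specific bug exposes all users.
  const exposedPerBugMonoculture = 1.0;

  // Three engines: the same bug exposes only that engine's users, and a breach
  // of one engine never reaches data held in the other two.
  const exposedPerBugDiverse = share;

  console.log(exposedPerBugMonoculture, exposedPerBugDiverse); // 1 vs ~0.33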


What next? iOS security vulnerability? This should be a reason to lift this policy and allow different operating systems on these devices! /s


If Europe's governments weren't so reliant on Apple's surveillance, maybe their regulators would demand that.


Lol what? You can't just drop a bomb `Europe relying on Apple's surveillance` without any details


https://en.wikipedia.org/wiki/Five_Eyes

> In early 2014, the European Parliament's Committee on Civil Liberties, Justice and Home Affairs released a draft report which confirmed that the intelligence agencies of New Zealand and Canada have cooperated with the NSA under the Five Eyes programme and may have been actively sharing the personal data of EU citizens. The EU report did not investigate if any international or domestic US laws were broken by the US and did not claim that any FVEY nation was illegally conducting intelligence collection on the EU.

For reference, this is the same NSA that has boasted about having inroads at companies like Google, Microsoft and Apple. The same FIVE EYES that recently "somehow" found the damning evidence to accuse India of conspiring to kill a foreign dissident.

Europe relies on America's surveillance network, and America's surveillance network relies on ___________.


Europe's surveillance network. They're all sharing information with each other.


So, Apple is letting secrets from one origin be in the same OS process as running code from another origin?

Isn't that shoddy security architecture 101?


I'm not entirely sure about this. The paper mentions that in Chrome and Firefox "different rendering processes handle pages with different effective top-level domain plus one sub-domain (eTLD+1)." (Meaning, windows in the same eTLD+1 group still share a process.) The paper continues:

> "Taking this approach a step further, Safari follows a simple one process per tab model, where two web- pages are never consolidated into the same rendering process, even under high memory pressure and even if they share an eTLD+1 in their URLs. Instead, Safari spawns a new rendering process for each tab until the system runs out of memory."

This suggests that Safari/WebKit is even more hardened in general. It's only in the context of `window.open()` that this isolation strategy is defeated. Notably, `window.open()` somewhat implies a shared context between the calling window and the newly opened one, since both windows receive a direct reference to the other. I can't see any description of how other browser engines would handle this differently and achieve perfect isolation, or whether, if they were explored in similar depth, they might yield similar vulnerabilities.
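For anyone unfamiliar with that shared context, here is a minimal sketch of the standard API at play (my illustration, not code from the paper; the URL is a placeholder):

  // Runs in the attacker-controlled page, in a browser context.
  const popup = window.open("https://victim.example/account"); // hypothetical target

  // `popup` is a live WindowProxy for the newly opened page, and that page in
  // turn sees this one via `window.opener`. Only a handful of cross-origin
  // properties are reachable through it (e.g. `closed`, `postMessage`), but
  // the references couple the two pages' lifetimes, which is presumably why
  // Safari keeps them in the same rendering process.
  console.log(popup?.closed);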


Yes. Even without Spectre, this would let anyone with any WebKit exploit get secrets from any other website.

Browsers with sandboxed multi-process architectures have been around since 2008, precisely because experts realised that rendering engines are so complex they cannot reasonably be secured, so need to be sandboxed for protection.

Unfortunately, I suspect that security experts within Apple were well aware of this, but were overruled because iOS devices tend to not have much RAM, and the user experience would be severely degraded by doing proper process-per-origin isolation, due to RAM exhaustion.


Chrome on Android also opted not to deploy their version of full per-origin isolation for the same reason. However Chrome does create a new process for cross-origin navigation, which is sufficient to protect pages which disable iframe embedding. That's what Apple missed here.


Safari has protection against cross-site navigation enabled by default. The issue here is cross-site window open.
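For what it's worth, a site can ask for those cross-window and cross-frame links to be severed with standard response headers, in browsers that honor them. A rough sketch using Node's built-in http module (the port and page content are placeholders):

  import { createServer } from "node:http";

  createServer((_req, res) => {
    // Severs window.opener links from cross-origin window.open() callers,
    // in browsers that implement Cross-Origin-Opener-Policy.
    res.setHeader("Cross-Origin-Opener-Policy", "same-origin");
    // Refuses embedding in cross-origin iframes.
    res.setHeader("X-Frame-Options", "DENY");
    res.setHeader("Content-Type", "text/html");
    res.end("<h1>sensitive page</h1>");
  }).listen(8080);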


That is a rather odd self-socratic method of commenting.


I usually do it because I want to pose a question and then give one viewpoint of answer to that question, while leaving the floor open to other viewpoints/opinions.

If done well, it leads to a better comment-reading experience. Not sure I did it well in this example though.


It ends up looking like sockpuppetry gone wrong, and I think it kind of gives you an excuse to pose the opening question in a (however inadvertently) flamebait-y, strawman-y way.


I like the style; your comment created a clear train of thought and context.


No. On page 9, Section 5.1 they state that by default Safari will spawn one process per tab and they provide less consolidation by default than Firefox or Chrome. It is only window.open(), which is used to create popups, that was not updated from the old design that did not use isolation to the new model that does support isolation. The "security experts" at Apple were just too incompetent to audit their code base and fix all the known security holes.

I mean, this is not exactly shocking coming from the same company whose "security experts" released a version of macOS that allowed anybody to log in as root with any password [1]. Their security review process is grossly incompetent. At some point you should stop believing the "security experts" who do the security equivalent of putting antifreeze into the ice cream because they ran out of sugar and the antifreeze tastes sweet so it must work just as well.

[1] https://arstechnica.com/information-technology/2017/11/macos...


So what we've learned once again is: running random code off of the internet is a bad idea... Wonder if we'll stop doing this at some point?


Be ready to say "playing random video off of the Internet is a bad idea" if someone decides JavaScript has to go.


I would have assumed the range of things you could do in video would be limited by codec. Can you provide links that explain how video can achieve similar feats?


Font: https://developer.apple.com/fonts/TrueType-Reference-Manual/..., there were a lot of exploits in various implementations about ten years ago: https://security.stackexchange.com/questions/91347/how-can-a.... There are still active in-the-wild exploits in 2023: https://googleprojectzero.github.io/0days-in-the-wild//0day-...

Image: The famous JBIG2.

Video: https://en.wikipedia.org/wiki/Stagefright_(bug)


To me the danger seems to be not in the randomness of the code, but that it can run without the user's prompting or consent. Perhaps it would be a good idea for browsers to disable JS JIT by default (as this is where a large percentage of holes crop up) and allow the user to enable it where they find the benefit low-risk and worthwhile (e.g. with vetted web apps).

The downside is that the web is a bit slower by default, but that might not necessarily be such a bad thing, and it could encourage more developer awareness of how much JS they're pulling in and how resource-intensive it is.


Just disable JIT and WASM.

I've been browsing that way for... oh, at least a year, probably more, and I don't notice a difference. The world isn't actually JavaScript benchmarks (which suffer horribly, running the same hot loop over and over), so in everyday browsing I seldom notice any performance hit.

It's just the default in my template for web browsing Qubes, along with disabling a few other things.
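In a Firefox-based template that boils down to a handful of prefs, roughly like this user.js sketch (pref names can change between versions, so double-check them in about:config):

  // user.js sketch; verify pref names in about:config for your Firefox version.
  user_pref("javascript.options.ion", false);          // optimizing JIT off
  user_pref("javascript.options.baselinejit", false);  // baseline JIT off
  user_pref("javascript.options.asmjs", false);        // asm.js off
  user_pref("javascript.options.wasm", false);         // WebAssembly off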

If you're on an Apple device, ignore Apple and enable Lockdown. You don't lose much (some image formats, occasionally this is annoying), and you gain serious robustness against a huge wave of attacks.


A little-known fact is that you can disable Lockdown Mode per site and per app (e.g. you can disable it for Obsidian to make it work properly, since it is web-based).

So actually it isn’t disabling JIT, it is making JIT opt-in.

You also lose custom fonts on sites, but that is more feature than bug these days.


That's a great tip, thanks! As a side note, how do you sync Obsidian on iOS?


I use the obsidian sync service. I have been completely satisfied with it.


A little extra attack surface won't stop the vested interests from making sure that their code (tracking, ads, whatever other user hostilities) continues to run.


Hit F12.

Unless you're using NoScript or something similar, you're doing just that right now.


Why does a website about a security vulnerability in a JavaScript engine sabotage the security mitigation of disabling JavaScript, by requiring it for collapsible sections? As if they couldn't just use <details>.


No joke! It's amazing how much better protected you are by not allowing JavaScript and other active content by default, but websites are dead set against letting people do that without massive inconvenience, because for some reason displaying even basic text and images has become unthinkable without a bunch of third-party, remotely hosted JS.


JavaScript should stop executing after body.load.

It should only resume executing when the user clicks or touches specific UI elements.

The browser should NOT be a generic application container. The browser was designed for "browsing", after all.


Interesting


Does all password autofill on iOS require 2FA, so Apple doesn't have much to fear? Or how else to explain that Apple hasn't mitigated this attack vector yet?

For websites, a site left auto signed-in seems like the more practical target for exploiting the vulnerability, so the natural mitigation is to not leave sensitive sites signed in and to use a native app for them instead?



