Hacker News | hashseed's comments

Navigate to a page, right click > inspect, open the What's New panel in DevTools.



Hi. I'm the V8 engineer who implemented the fix and wrote the blog post.

The reason the initial bug report [0] was marked as WAI (working as intended) while the Medium blog post got a lot more traction is very simple and much more human.

When the initial issue was filed to the Chromium bug tracker, it was routed to Chrome's security team. From a security perspective, Math.random() provides no guarantees about cryptographic safety, so naturally it was marked as WAI – nobody from the V8 team actually saw this issue. One of the suggestions, "3. Make crypto.random(size)", was acted upon though, and so crypto.getRandomValues() [1] was introduced in Chrome 11, about 10 months after the issue report. In terms of specifying and introducing new Web APIs, this was incredibly fast.

After the Medium blog post was published, someone filed an issue directly to the V8 project. It did not question the spec compliance, but pointed out that the PRNG quality could be better. That nerd-sniped me into researching this topic and after reading a few papers, I implemented the fix with xorshift128+. I'm thankful that my team lead allowed me to set aside some time to work on this even though it was not on our project roadmap.
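For readers curious what xorshift128+ looks like, here is a sketch using BigInt to emulate 64-bit unsigned arithmetic. The shift constants follow the 23/17/26 variant; the seed values and helper names are illustrative, not V8's actual code.

```javascript
// 64-bit mask for emulating uint64 wraparound with BigInt.
const MASK64 = (1n << 64n) - 1n;

// Returns a generator function closing over 128 bits of state.
// The state must not be all zeros.
function makeXorshift128plus(seed0, seed1) {
  let state0 = seed0 & MASK64;
  let state1 = seed1 & MASK64;
  return function next() {
    let s1 = state0;
    const s0 = state1;
    state0 = s0;
    s1 = (s1 ^ (s1 << 23n)) & MASK64; // left shift with 64-bit wraparound
    s1 ^= s1 >> 17n;
    s1 ^= s0;
    s1 ^= s0 >> 26n;
    state1 = s1;
    return (state0 + state1) & MASK64; // the "+" in xorshift128+
  };
}

// A double in [0, 1) can be derived by keeping the top 52 bits
// as the mantissa; a simple approximation:
function toDouble(x) {
  return Number(x >> 12n) / 2 ** 52;
}
```

The output passes through a cheap addition on top of the xorshift core, which is what lifts its statistical quality well above the old MWC1616 generator while staying nearly as fast.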

The risk of a regression on our benchmarks was of course part of the consideration, but there were many options to avoid one. In V8, crossing the boundary from machine code compiled from JS into C++ runtime builtins is fairly expensive. I could either have ported xorshift128+ to assembly so that this boundary crossing was not necessary, or amortized the boundary-crossing cost by buffering multiple random values. I chose the latter because porting to assembly is error-prone and I would have had to do it for each platform. V8 has better options nowadays, such as expressing the algorithm in an intermediate representation, but back then that was not available.
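The buffering idea can be sketched as follows. This is illustrative, not V8's actual code: the batch size is arbitrary, and the injected fill function stands in for the expensive JS-to-C++ runtime call.

```javascript
// Pay the expensive "runtime call" once per batch instead of once per
// Math.random() call, serving subsequent calls from a cached array.
const BATCH_SIZE = 64; // arbitrary; V8 uses a small fixed-size cache

function makeBufferedRandom(expensiveBatchFill) {
  let cache = [];
  return function random() {
    if (cache.length === 0) {
      // Stands in for the costly boundary crossing into C++.
      cache = expensiveBatchFill(BATCH_SIZE);
    }
    return cache.pop();
  };
}

// Usage sketch: amortize one fill across BATCH_SIZE calls.
const random = makeBufferedRandom(
  (n) => Array.from({ length: n }, Math.random)
);
```

The trade-off is a small amount of memory per context in exchange for crossing the boundary 64x less often.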

[0] https://g-issues.chromium.org/issues/40404440

[1] https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR...

[2] https://g-issues.chromium.org/issues/40643979


I work on Chrome DevTools.

The reason is absolutely security concerns. You don't want to leak a function that can expose a list of all objects on the JS heap.


I don’t understand. The page can’t access these things, only dev tools, so any action to expose it would still have to be mediated by user action; and even then, what’s so bad about exposing this? Everything in it is scoped to the document, and if it can expose things you don’t want exposed, then so can getEventListeners(), right? Yet getEventListeners returns an actual value. What’s the actual security problem of being able to list all objects on the JS heap?


Chrome sets navigator.webdriver to true when controlled by automation.

Until now, bots could simply use headful mode to achieve the same effect that is now made available through the new headless implementation.
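For context, the flag is trivially observable from page script. The helper below is a hypothetical check a page might run, not code from any particular bot-detection product; it takes the navigator object as a parameter so the logic is clear.

```javascript
// Per the WebDriver spec, browsers under automation expose
// navigator.webdriver === true. A page-side check might look like:
function looksAutomated(nav) {
  return nav.webdriver === true;
}

// In a browser you would call looksAutomated(navigator).
```

Headful automation used to leave this flag set too, which is why the new headless mode by itself does not change what pages can detect.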


I manage the team at Google that currently owns the Puppeteer project.

The previous team that developed Puppeteer indeed moved to Microsoft and have since started Playwright.

While it is true that staffing is tight (isn't it always?), the number of open issues does not tell the full story. The team has been busy with addressing technical debt that we inherited (testing, architecture, migrating to TypeScript, etc.) as well as investing in a standardized foundation to allow Puppeteer to work cross-browser in the future. This differs from the Playwright team's approach of shipping patched browser binaries.


> The team has been busy with addressing technical debt that we inherited [...] migrating to TypeScript

Wow, not writing stuff in TypeScript is now considered technical debt? I knew people were already rushing to rewrite everything in TypeScript if they could, but I didn't know we'd come this far along the hype cycle already.


Yes, definitely. I've worked at two companies in three years, with a combined 250,000 employees, and both consider writing JavaScript deprecated in favor of TypeScript.


Perhaps because TypeScript is a Microsoft baby?


GP manages the Puppeteer team at Google.


I used Puppeteer on a project recently to generate some really big and complex PDFs that would have been a massive pain to do any other way, so thanks for your work, and I'm very happy to hear that the project isn't dead.


Glad to hear that. Puppeteer still has a number of compelling things over Playwright (like not shipping patched binaries) so I hope competition in this space can continue to happen :)


> This differs from the Playwright team's approach of shipping patched browser binaries.

Can you expand on that?


"Each version of Playwright needs specific versions of browser binaries to operate." [0]

They patch and compile browser binaries so they have the functionality Playwright needs.

Their build of Chromium is one release ahead of what's out, but it looks like one could maintain a library of older Playwright browser binaries to test with. They probably have an older Firefox 91 binary that's feature-equivalent to the current Firefox ESR. Their WebKit builds won't ever be exactly the same as Apple's Safari.

[0] https://playwright.dev/docs/browsers


There is a lot of content on V8's dev blog, with different depth, all pretty well written: https://v8.dev/blog


For sure the blog is great! But I’m thinking something more along the lines of “here’s how you’d build something like this” or “here’s the stuff to read to get started on a project like this”.


JIT engineers are mostly oddly specialized compiler engineers, so you're really looking at learning how optimizing compilers work as a prerequisite.


Impressive numbers. Did you try to use startup snapshot in V8 to improve TTI? https://v8.dev/blog/custom-startup-snapshots


V8 already employs W^X, i.e. memory pages allocated for V8's heap are either writable or executable, but not both at the same time.


By allowing JIT at all, a small ROP chain can call VirtualProtect to make a larger payload executable.

Sure you can do everything with ROP, but it is less convenient (and Intel CET might eventually make ROP attacks actually hard).


Well, except for WebAssembly. But even then, it's still fundamentally possible to hijack control of whatever changes the pages from RW to RX.


You were already able to do this by loading any other kind of cached resource.


While true, I was under the impression that there wasn't a cross-domain cache that wasn't opt-in. Again, though, maybe this is per-domain so it's moot.


Simple cross-domain <IMG> tags can have their load time measured.
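The side channel can be sketched like this. Everything here is illustrative: the function names are made up, the probe only runs in a browser, and the 5 ms threshold is a placeholder, not a calibrated value.

```javascript
// Load a cross-origin image and time how long it takes; a very fast
// load suggests the resource was already in the browser cache.
// Browser-only: relies on the DOM Image constructor.
function probeImageTiming(url) {
  return new Promise((resolve) => {
    const img = new Image();
    const t0 = performance.now();
    img.onload = img.onerror = () => resolve(performance.now() - t0);
    img.src = url;
  });
}

// Classify the measurement; the threshold is a made-up example value
// and would need per-machine calibration in practice.
function isLikelyCached(loadMs, thresholdMs = 5) {
  return loadMs < thresholdMs;
}
```

Partitioning the HTTP cache by top-level site is what closes this off, since the attacker's document no longer shares cache entries with the target origin.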

