V8 release v6.8 (v8project.blogspot.com)
174 points by stablemap on June 21, 2018 | 72 comments



> The optimizing compiler did not generate ideal code for array destructuring. For example, swapping variables using [a, b] = [b, a] used to be twice as slow as const tmp = a; a = b; b = tmp. Once we unblocked escape analysis to eliminate all temporary allocation, array destructuring with a temporary array is as fast as a sequence of assignments.

I enjoy seeing updates like this to learn more about how the engine works, but: these changes are exactly the reason you _don't_ want to design your code around specific micro-optimizations, particularly when it comes to JS. They're at parity now and it is quite possible that 6 or 12 or 24 weeks from now, the cleaner destructuring approach will actually be faster than the tmp-var approach. Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.
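For concreteness, the two patterns in question (a minimal sketch, using the same variables as the article's example):

    let a = 1, b = 2;

    // Destructuring swap: conceptually builds a temporary array,
    // which V8's escape analysis can now eliminate entirely.
    [a, b] = [b, a];

    // Classic swap through a temporary variable.
    const tmp = a;
    a = b;
    b = tmp;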


I'd have to be running that billions of times a day before I'd start to care about the extra nanoseconds. Clean-looking code makes me feel better than having my servers sit at 67.3% CPU instead of 67.4% CPU.


Yeah, even back in the days when serious applications on 16-bit platforms were written in Assembly, I hardly felt the need to disable bounds checking in Turbo Pascal, except in cases where the solution was to rewrite the function in Assembly anyway.


I'd take cleaner code for half the speed quite often!


I see that you too are a Python developer! /s


There's a serious point: cleaner code makes high-level optimizations easier, and those are usually far bigger wins. I've seen more than a few cases where Python, Perl, etc. beat hand-tuned C because the C used so many clever tricks that nobody wanted to change it.


I absolutely agree! I use python like 70% of the time, and accessibility is one of the big reasons for that.

Just making a joke.


Yeah, wasn't criticizing you. I think one of the signs of experience is when people go from just laughing at that to laughing and thinking more deeply about it.


How is a language which doesn't have a type checker a synonym for clean code?


Python's readability helps a lot — I've seen a ton of code in statically-typed languages where a bug was obscured by the syntactic boilerplate and nobody noticed that, say, they were calling the correctly type-checked function with the wrong variable, casting it incorrectly, etc. If the language or popular frameworks encourage boilerplate that's especially effective at getting people to skim over important details.

Newer languages with solid type inference help a lot for reducing that cognitive load, so this isn't saying that one side of the static/dynamic-typing debate is wrong but rather that it's an important design factor for programmer productivity.


How is a type checker or the presence of types synonymous with clean code?

The two are totally orthogonal concerns.

You can write highly obfuscated spaghetti code from hell in any type checked language you wish...


Python does a lot of type checking and complains loudly, very different from e.g. JavaScript.


Besides, whether or not it complains about misused types says nothing about the code being clean.

One can write totally unreadable code in Scala and a totally readable version of the same algorithm in Python if one so wishes.


Check out mypy. It works well.


If you hit a hot path the impact can be way higher. Especially if you're running client-side and want a fluid experience. Example: https://github.com/facebook/react/pull/12510

But definitely agree that cleaner code > performance in most cases. First make sure you really do need to squeeze that performance out.


0.01 is kind of a big deal if you do that a few thousand times though, right?


0.01 is the resulting speedup in CPU usage (overall performance). It factors in the call count.


This is what I meant. I suspect the 0.01 probably encapsulates ALL these little sub-optimisations I decide on in a code base, rather than just this one variable swap. CPUs are a lot faster than most people realise. It's usually other things that slow down a program, not whether the programmer uses array.forEach or a basic for loop.

Disclaimer: I do tend to use for loops a lot since they can be quite readable.


Whatever leads to 60 fps on the client side is welcome.


> Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.

I think the best approach, if you have the discipline to follow it, is to normally code for ease of understanding ("clean code"), and occasionally fall back to performance hacks in performance-critical code, but don't stop there. The following will likely make you much more confident that your choices were correct, that you can easily reverse them when needed at little cost, and that you'll know (or be able to easily query) whether that time has come.

1. Mark each performance hack with some identifier in a comment, e.g. /* PERFHACK #4632 */ or /* PERFHACK #4632 - destructuring is slow */.

2. Maintain a test in the same repo which confirms this assertion by actually testing the relative speed of each approach, expecting the hack to be faster by some margin (a sketch of such a test follows the list). Each test should contain the PERFHACK identifier and optionally a description.

3. Occasionally (or along with regular testing) run all these PERFHACK tests, looking for failures, which would indicate the hack is no longer faster than the simple case by the expected margin.

4. To fix the failing test, reverse the test assertion, and search the codebase for instances of that PERFHACK identifier and fix those as well.
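A minimal sketch of what such a test could look like, assuming a Jest-style runner and a global performance.now(); the margin and iteration count are illustrative, and a real setup would want warm-up runs and multiple samples:

    // PERFHACK #4632 - assert the tmp-var swap is still at least 20%
    // faster than the destructuring swap it replaces elsewhere.
    function timeIt(fn, iterations = 1e6) {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) fn();
      return performance.now() - start;
    }

    test('PERFHACK #4632: tmp-var swap beats destructuring swap', () => {
      let a = 1, b = 2;
      const normalMs = timeIt(() => { [a, b] = [b, a]; });
      const perfhackMs = timeIt(() => { const tmp = a; a = b; b = tmp; });
      // Require a 20% margin: the normal code at 80% of its time must
      // still be slower than the perf hack.
      expect(normalMs * 0.8).toBeGreaterThan(perfhackMs);
    });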


That sounds like a pretty fragile test. Could you see it failing if I plug my laptop into AC power while the test is running? (Intel SpeedStep or whatever it's called will take my laptop from 2 GHz to 3 GHz when I do that.)


> could you see it failing if I plug my laptop into AC power while the test is running?

That should at most cause a single test-case failure, since it only causes a speedup during one leg of the benchmark.

Since these are not completely automated (you might want to have them as a set of allowed-to-fail tests, or tests you have to explicitly request), you can easily just re-run the tests to confirm the prior output wasn't a fluke. You need something like that anyway, because you're essentially testing benchmark outputs, and it's not like it's easy to set up perfectly consistent benchmarks in the first place.

The margin by which you want your PERFHACK to be faster than the default code is also your margin of error. Theoretically you would assert something like NORMAL_SECONDS * 0.8 > PERFHACK_SECONDS in your test to ensure you're still getting a 20% or more speedup. If some changes cause it to drop to 15%, you then get to assess whether that gain is still worth the hack (and perhaps change the test to be a 10% gain assertion), or whether you want to clean up your code (or at least schedule it).

The point is that you've put a system in place that allows you to have a clearer picture of whether your prior decisions with regard to performance hacks still make sense.


I think that kind of test would be more useful on CI hardware, which is usually pretty consistent, but it's a good point to raise.


Sure, the tmp-var approach could be slower in some weird edge cases, but probably only slightly, while the array-swapping approach could be much slower if the JS engine doesn't implement this optimization at all, or the code pattern runs in the interpreter or baseline compiler.

Not arguing you should do such micro-optimizations, but generally writing "simpler" code should also allow less sophisticated compilers to generate fast code.


Depends on if you want your code to go fast right now or if hypothetically going faster maybe sometime in the future perhaps is good enough for you.

There's a good argument for both situations depending on context.


Or there's a third option, which is that the difference in performance doesn't matter for your application, which is the case the vast majority of the time. Destructuring an array is not going to be a bottleneck for almost any piece of code.


Very excited to see more improvements with V8. I am worried, however, about how good Google Chrome is and how completely it dominates in some markets. I hope that Firefox can continue to improve (Quantum is great!) and stay relevant, especially since Electron apps are popular and it's not certain that some day they won't include Google telemetry.


Chakra seems decent too. And aren't V8 and Chakra both open source? Does Google improving V8 increase the size of their moat?

V8 improvements also benefit NodeJS, and they also contribute to it, I think.

I use both Chrome and NodeJS and will take any speed improvements they make.


It certainly increases the size of their influence.


I want to see something like Flow integrated into the JS spec. Types can help optimization just as they can help correctness!


Possibly! But trickier than it seems at first.

For types to help with optimization, those types have to be correct and reliable (i.e. sound). If every line of your app and all of its dependencies are sound, then you should expect a speedup. If there is any unsound code, then you'll have to generate runtime checks to preserve the soundness properties that your optimizer assumes, and those can actually result in _slower_ code.

Even Flow has made compromises that are good for a type checker, but likely unacceptable if an optimizer is relying on them. Quoting https://flow.org/en/docs/lang/types-and-expressions/

> Soundness is fine as long as Flow isn’t being too noisy and preventing you from being productive. Sometimes when soundness would get in your way too much, Flow will favor completeness instead. There’s only a handful of cases where Flow does this.


There is stuff that Flow doesn't even handle, such as inline destructuring with types. Imagine how complicated it would get at the engine level when parsing the syntax.


I like the improvements made to typed array sorting.


I can't wait for WebAssembly to get direct access to the DOM. JavaScript cannot be gone soon enough.


There are probably billions of lines of JavaScript code out there, code that many people paid a lot of money to have written. JavaScript isn't going anywhere.

What might happen instead is that there will be a sudden burst of front end languages running on WASM, which will allow some front end people to completely avoid Javascript.

It's hard to predict what the adoption of those languages will look like; there are a million factors in play and few of them are technical.


Most of said languages are already adopted. You can expect languages running server-side to target WebAssembly first.

The other great thing is that WASM libraries written in all sorts of languages will be available to all of them.


There are still problems with WASM in the areas of multithreading, shared data structures, and memory barriers (think advanced/concurrent GC implementations). Because of this, languages do not automatically, cleanly and efficiently map to WASM yet.


Most of the pre-existing server-side languages come with their own baggage. PHP powers most of the internet, just saying :)


I don't have a problem using good WASM libraries that were written in PHP. To target WASM, a PHP compiler would have to respect the WASM semantics, which is what I will consume.


For a dynamic language to run on WASM, you would have to bundle its runtime with your application, which is probably a deal-breaker for many use cases.


I like JavaScript, and I think a lot of other people do as well, so it probably won't go away just because of WebAssembly. I don't see too many people writing C# in browsers. I worked with a C# backend and JavaScript front end for 4 years and got sick of having to create classes for no reason other than that C# needed a type, and other things like that.


The independence WebAssembly brings is that you'll still be able to use JavaScript whether or not you are using a compiler targeting WASM, while giving everybody else the liberty of that choice. What's also great is that even if you stick to your guns, you'll be able to use libraries written in other languages that target WebAssembly.

And finally, to address your OO concerns: I, in fact, became progressively annoyed by the state of the web as it made heavy adoption of OO patterns (thinking of Angular). The ML family of programming languages has offered a much better software development experience in typed programming languages since the 1970s. There are already several compilers for those languages targeting JavaScript which are only waiting to target WASM (WebSharper for F# & C#, Ocsigen for OCaml, Scala.js for Scala, and I'm sure plenty of others).

JavaScript was conceived as Lisp with all the good parts removed. Note that ClojureScript is a dialect of Lisp that targets JavaScript. I'm not a Clojure developer myself, but I would have to agree that Clojure developers seem to be among the most productive programmers out there.


There are some fundamental issues with running dynamically typed languages on top of wasm. Those issues can be resolved somewhat, but there's quite a bit of work to be done before they can be supported.

For basic support, these languages need GC support, and some degree of stack walking if they don't want to use shadow stacks. That support should be coming soon.

But you can't get around the fact that you need to ship a runtime to support these languages, so you're not just shipping your code, but the runtime required to run it.

But a runtime alone doesn't give you fast execution - you need to JIT. And JITs require a lot of deep access into the architecture. You need to be able to generate and inject executable code into the program. You need to be able to patch code at runtime. For security purposes you'd ideally like a separate process that manages the codegen to provide fast W^X support for JIT code.

For the foreseeable future, any real dynamic language support on the web will require direct runtime support, namely the regular JS engine.

Disclaimer: I'm a JS JIT developer at Mozilla, have worked on the IonMonkey optimizing compiler and did a good chunk of the design and implementation of our baseline JIT.

I'm excited at the prospect of wasm being a portable, tight runtime that'll eventually be a good cross-platform target for writing dynamic languages, but there is quite a bit of infrastructure required within wasm before it can support that.


Oh, and I forgot to mention: Elm (also an ML), TypeScript, and Rust/Go/C/C++, which already look to be targeting WASM.


There must be a cost involved in everybody using different languages even though you can use the libraries of any language.

For example, the API between your language and the other language is probably not going to match your language's style. You see it all the time with APIs that seem to be written for Java but are in another language.

If the library has to write different APIs for all the different languages, then that is an additional cost.

Then there is the fact that fewer people will know a "standard" language, because they will all be using different languages, so they can't contribute back to the libraries they use as easily. For example, if I am using a Scala library and want to change something or commit a bug fix or whatever, I will have to learn Scala.


I will grant there is some truth to your first point; however, I have some doubt as to the extent to which it'll be a problem, for two reasons. The first is that WebAssembly will narrow down the variability of programming language X with respect to Y. The second is that I've consumed several JavaScript libraries from F# (using WebSharper) and always found my way. In particular, I've seen all sorts of libraries adapted to Angular directives (which I think relays some of the same problems), so we know people can learn to handle it.

Your second point is fair; however, I think programmers tend to stick to their "favourite programming language" like a religion, and fail to appreciate that learning a new programming language can be done in two weeks, while learning the ecosystem of libraries of a platform is a multi-year endeavour. In particular, you'll be able to carry over your knowledge of all the WASM libraries you've learned to love when you decide to make a switch from, say, JS to XXX.


I think you're very lucky if you can learn a new programming language in two weeks.

I've been writing Python for longer than I've been writing C#, but I'm nowhere near as productive in Python.


Your two statements don't seem to directly support each other; I'm not sure if I should interpret them separately.

As for the second, there are zillions of things that could explain that, and it's definitely not clear to me that your being less productive in Python than in C# has anything to do with how much time you've spent on either. Which is the whole point: programming languages aren't equal. Neither are libraries.

If you set aside the time to learn the libraries and narrow it down to just the syntax, the time to learn it will certainly vary from person to person, but it isn't a multi-month task for anyone. I think you are scoping it in the larger context of non-shared libraries.

The best example I could give would be learning VB.NET when you already know C# (or vice versa).


I'm a C# web dev mostly - I have the opposite concern to you. I ended up using the module pattern a lot to get the same kind of structure in my JavaScript as I have in my C# code. If I'm hitting a WebAPI backend, I want the "frontend" boilerplate to match pretty closely.


There are already several libs that compile to WebAssembly and let you manipulate the DOM, e.g.

https://github.com/mbasso/asm-dom

You can already call out to small JS code snippets that work on the DOM from WASM, and since DOM manipulation is slow anyway, the small overhead of calling into JS won't make a difference. So I'd say go ahead and manipulate the DOM from WASM; even if WASM supports direct DOM access later, it won't make much of a difference (apart from slightly less and cleaner code in the DOM-access library).
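The glue for that is tiny; a minimal sketch, where the module file name, the import shape, and the run export are all made up for illustration:

    // JS side: hand a DOM helper to the WASM module via its import
    // object. Only numbers cross the boundary, so pass an index
    // rather than a string.
    const imports = {
      env: {
        setBackground: (colorIndex) => {
          const colors = ['white', 'papayawhip', 'rebeccapurple'];
          document.body.style.background = colors[colorIndex];
        },
      },
    };

    WebAssembly.instantiateStreaming(fetch('app.wasm'), imports)
      .then(({ instance }) => instance.exports.run());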


Java didn't go away when the JVM got invokedynamic. JS will be the default language of the web for some time - although it's great that other languages can now be citizens.


But Java is an actual well-designed language. A bit verbose, yes. Lacking generics early on, yes.

JavaScript, however, is in a way similar to PHP, and I'd say more dangerous since, unlike PHP, it doesn't have an ugly syntax to warn you.

I've been working in the JS ecosystem lately and suddenly crazy knowledge from my life as PHP programmer turns out to be useful:

- adding statements above a function or a class? No problem: they will be run on loading the file, just like in PHP.

- == not checking if things are equal? Just use === like in PHP (examples below).
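Some of the usual == coercion results that === avoids:

    // Loose equality coerces its operands; strict equality does not.
    0 == '0'     // true  ('0' is coerced to the number 0)
    0 == ''      // true  ('' is coerced to the number 0)
    '0' == ''    // false (both strings, so no coercion, and they differ)
    0 === '0'    // false (different types)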

That said, I'm really impressed with the ecosystem. It is just JavaScript the language that needs to be replaced.


Multiple downvotes, no correction.

I guess that is either the "shoot the messenger strategy" or what I said was so horribly wrong I should have seen it myself.

Whoever wonders which one is true can verify for themselves that, except for the syntax, modern PHP and JavaScript are surprisingly similar. :)


So my estimate is that this is going to happen approximately never.

The challenges to offering a direct DOM API are numerous. The DOM API uses the full complement of JavaScript types, whereas WebAssembly only has integers. So you need to either standardize a binary representation of JavaScript objects, arrays, functions, pointers, etc., or modify all the hundreds of API calls to use different semantics, e.g. a C-style string for a node name, a wasm function-table index for a callback.

References of all kinds would be a huge sticking point, especially with garbage collection involved. If you store them C-style as integers, then how do you dereference them safely, and what happens if you add 50 to a reference, etc.? If they're stored as opaque references, then that's an entire new system added on to wasm.

All of this could be done in theory, but I don't see it happening with WebAssembly. WebAssembly is a deliberately conservative and simple project. It was partly a reaction to the more aggressive NaCl project, which had threads and its own low-level API (Pepper), etc. WebAssembly is moving at the glacial pace typical of browser consensus, and the stakeholders involved don't actually want it to take over the web ecosystem (Apple doesn't want web apps to be as good as App Store apps; Google doesn't want opaque web apps that can't be easily injected with ads and analytics; Mozilla doesn't want the browser to be a simple, commodity VM).


Direct DOM access is a pretty high priority for the Wasm folks; that said, the consistent message of WebAssembly since forever has been augment, not replace.

You can find the proposal here: https://github.com/WebAssembly/host-bindings/blob/master/pro...


Yes, we'd first need to define an ABI, but that can be quite simple. Passing floats, for example, could be done by passing the bytes that define the IEEE float. For strings, you could pass UTF-8. Etc.
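A minimal sketch of the string half of such an ABI, assuming (this is an assumption, not a standard) that the WASM side passes a pointer/length pair to UTF-8 bytes it wrote into its linear memory:

    // Host-side helper: decode a UTF-8 string that the WASM module
    // placed in its linear memory at byte offset `ptr`, length `len`.
    function readUtf8(memory, ptr, len) {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      return new TextDecoder('utf-8').decode(bytes);
    }

Floats are even easier: an f64 already crosses the boundary as the eight bytes of its IEEE 754 representation.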


Besides what WebAssembly offers as general purpose VM, in an ideal world TypeScript would take JavaScript's place.


Why would it do that? JavaScript is more appropriate in the early stages of a project. You don't want to go to the trouble of declaring types everywhere only to realize everything has to change. Once you stabilize you can add in TypeScript, which is the great thing about it.


It is precisely when you want to change everything that types are useful, because the compiler will helpfully point you to all cases that you forgot to change.


No matter what stage of the project, you're implicitly writing code against specific types whether you know it or not. TypeScript just makes those types explicit, and makes clear all the places you've made type assumptions. So if you need to change types later in the project, you can do so with confidence, knowing that you've handled all the places where you made type assumptions.


Some of us only see value in dynamic typing for shell scripts, having been burned too many times in large code bases.


Wouldn't a UI in WebAssembly make it impossible to block ads? The recent Qt announcement renders to canvas, doesn't it?

I thought the idea was to mostly use WebAssembly for parts that are performance critical, not replace all of it?


This has always been possible if you just write sufficiently obfuscated JavaScript. However, a UI that is entirely rendered in a canvas element would basically be unusable by anyone with a disability, so that technique is useless for big orgs.


The cynic in me can't help but expect most web companies to care more about pushing unblockable ads down everyone's throats than about the disabled.


That’s why decent countries have regulation to make them care about things other than short-term profit.


It is not correct, but most big orgs don't care about those details unless they are legally enforced by the government.

I have yet to work on any web project where stuff like ARIA was even on the requirements list.


Unless they spent a _ton_ of effort re-implementing the DOM/native browser UI in canvas, it'd probably be basically unusable by anyone without a disability too.


What's stopping anyone from adding accessibility features to single-element canvas apps?

A UI framework developed exclusively for canvas apps could probably add many such features.


It would also be impossible to inject ads into an existing wasm ui that didn't have them.


> Wouldn’t a UI in WebAssembly make it impossible to block ads?

In the future, machine learning will be applied to detect and disable ads visually.


I also can't wait for binary blob websites.


I'm all for this, but I won't hold my breath waiting for a responsive GUI to emerge.



