What I'd like to see is a memory-safe JS interpreter (no JIT). Yes, it will be slow, but 90% of websites don't need JS to go that fast, and JIT opens up a security can of worms.
So if I go to Google docs, I can toggle spidermonkey (or whatever Firefox's JS compiler is called nowadays), but if I go to $randomwebsite, I'll get a secure and usable web.
Not very: the parts that make it NP-hard are allowing libraries to specify maximum versions (and other more complex version ranges). Most of the time libraries use minimum constraints (~ or ^), which allows the heuristic to work like Go's algorithm. In Rust, Node, and other languages, libraries can be imported twice as different versions (without requiring a major-version renaming like Go's); this also gives the heuristic an out: if it reaches a really complex case it can just give you both versions. Beyond that, package management is a barely disguised 3-SAT problem, for which we have good, fast solvers. There are definitely some edge cases, but when's the last time you ran any of the following package managers and worried about dependency solve speed? cargo, apt-get, npm (and yarn), dnf, zypper. IO far and away dominates the profiles of these programs; solver speed is basically a non-issue in practice.
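To make the "barely disguised 3-SAT" point concrete, here's a toy backtracking resolver in TypeScript (all package names, versions, and constraints are invented for illustration; real solvers add caching, error reporting, and heuristics on top of this core):

```typescript
// Toy dependency resolver: pick one version per package so that every
// constraint is satisfied. Constraints are predicates on a version number,
// which covers ^, ~, <=, and friends.
type Version = number; // simplified: 15 stands for "1.5", say
type Constraint = (v: Version) => boolean;

interface Universe {
  versions: Record<string, Version[]>; // available versions, newest first
  // deps[pkg][version] = constraints that pkg@version imposes on others
  deps: Record<string, Record<Version, Record<string, Constraint>>>;
}

function solve(
  universe: Universe,
  pending: string[],
  chosen: Record<string, Version> = {}
): Record<string, Version> | null {
  if (pending.length === 0) return chosen;
  const [pkg, ...rest] = pending;
  for (const v of universe.versions[pkg]) {
    const next = { ...chosen, [pkg]: v };
    // check every constraint imposed by the packages chosen so far
    const ok = Object.entries(next).every(([p, pv]) =>
      Object.entries(universe.deps[p]?.[pv] ?? {}).every(
        ([dep, c]) => !(dep in next) || c(next[dep])
      )
    );
    if (!ok) continue; // this version conflicts; try an older one
    const result = solve(universe, rest, next);
    if (result) return result; // backtrack if the subtree fails
  }
  return null;
}

// Example universe: foo@20 needs bar <= 16, so the newest bar (17) must
// be rejected and the solver backtracks to bar@16.
const u: Universe = {
  versions: { foo: [20], bar: [17, 16] },
  deps: { foo: { 20: { bar: (v) => v <= 16 } }, bar: {} },
};
const plan = solve(u, ["foo", "bar"]);
// plan is { foo: 20, bar: 16 }
```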
It does. There are ways to mark a package as "only once" in the dep graph. For instance, C libraries are required to be marked in this way.
The only-once constraint also has a nice out for the SAT solver: if you reach a conflict or something that can't be solved cheaply, you just make the user select a version that may not be compatible with the constraints. Bower, dep, and Maven work that way.
You anticipated where I was going, which is mutable state + multiple copies of packages seems like a recipe for trouble.
So, I'm not sure how happy I would be as a user if my package installer bailed out and asked me to choose!
Out of curiosity, how do you mark your package as "only once" in cargo? I tried googling, and didn't find an answer, but did find a bug where people couldn't build because they ended up depending on two different versions of C libraries!
It does make me wonder if MVS will solve real pain in practice. :-)
> So, I'm not sure how happy I would be as a user if my package installer bailed out and asked me to choose!
It's definitely not a great UX, but at the end of the day the problem can only be solved at the language level or by package authors choosing new names. For instance, in Java you can't import 2 major versions of a package. For minor versions, having to bail out has been incredibly rare in my experience. I only see it when there are "true" incompatibilities, e.g.
foo: ^1.5.0
bar: foo (<= 1.5)
> Out of curiosity, how do you mark your package as "only once" in cargo? I tried googling, and didn't find an answer, but did find a bug where people couldn't build because they ended up depending on two different versions of C libraries!
I think it's the `links = ""` flag. It may only work for linking against C libraries at the moment, but cargo understands it!
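For reference, `links` is a key in the `[package]` section of Cargo.toml. A sketch of what a -sys crate's manifest might look like (names and values are illustrative):

```toml
[package]
name = "git2-sys"    # hypothetical wrapper crate for a C library
version = "0.1.0"
build = "build.rs"   # the links key requires a build script
# Claims the native library "git2". Cargo refuses to build a dependency
# graph that contains two packages with the same `links` value, which is
# how the "only once" constraint is enforced for C libraries.
links = "git2"
```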
> It does make wonder if MVS will solve real pain in practice. :-)
Not by itself; semantic import versioning is the solution to the major-version problem, by giving major versions of a package different names. Go packages aren't allowed to blacklist versions, though your top-level module is. This just means that package authors are going to have to communicate incompatible versions out of band, and that the go tool may pick logically incompatible versions with no signal to the user beyond (hopefully) broken tests!
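For readers who haven't seen it, the MVS algorithm itself is small: walk the requirement graph and take, for each module, the maximum of the minimum versions requested. A rough TypeScript sketch (module names invented, and ignoring details of the real algorithm like the main module's exclusions and replacements):

```typescript
// Minimal version selection (MVS), sketched. Each module@version declares
// minimum versions for its dependencies; the build list is the max of the
// minimums reachable from the root. No SAT search is needed.
type Req = { mod: string; min: number };
type Graph = Record<string, Req[]>; // key is "module@version"

function mvs(graph: Graph, root: string): Record<string, number> {
  const chosen: Record<string, number> = {};
  const visit = (key: string) => {
    for (const { mod, min } of graph[key] ?? []) {
      if ((chosen[mod] ?? 0) >= min) continue; // already high enough
      chosen[mod] = min;        // bump to the higher minimum
      visit(`${mod}@${min}`);   // and pick up that version's own deps
    }
  };
  visit(root);
  return chosen;
}

// Example: root needs B>=2 and C>=1, but B@2 needs C>=3, so C lands on 3.
const graph: Graph = {
  "root@0": [{ mod: "B", min: 2 }, { mod: "C", min: 1 }],
  "B@2": [{ mod: "C", min: 3 }],
};
const build = mvs(graph, "root@0");
// build is { B: 2, C: 3 }
```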
> Go packages aren't allowed to blacklist versions, though your top level module is. This just means that package authors are going to have to communicate incompatible versions out of band, and that the go tool may pick logically incompatible versions with no signal to the user beyond (hopefully) broken tests!
Yeah, it seems if the Go system ends up not working out in practice, this will be why.
But because of the minimal nature of MVS, you won't run into this problem unless something else is compelling you to go to the broken version. And by the time that's happening, you'd hope that the bug would've been reported and fixed in the original library.
It'll be interesting to see how it plays out in practice.
(Also, if I have a library A that depends on B, which is being reluctant or slow about fixing some bug, I can always take the nuclear option and just fork B and then depend on that. Basically the explicit version of what Cargo would do through deciding it couldn't resolve w/o creating two versions. But I think the incentives might be set up right that the easy/happy path will end up getting taken in practice.)
Packages are made up of modules, and modules can have global state. But doing so directly is unsafe, specifically because it can introduce a data race. Rust also does not have “life before main”, so it doesn’t get used in the same way as languages that do. I’m not sure if Go does?
(I replied but the reply vanished. If it reappears, apologies for the dup.)
Yeah, go has a magic function `func init()` which gets called before main. (You can actually have as many init's as you want, and they all get called.)
Probably evil, though so far it hasn't hurt me in the same way as, e.g., C++ constructors have. Maybe because it's more explicit and thus you're less likely to use it in practice.
Are you sure? While it's open source, your phone won't be (since the OS is MIT-licensed, your OEM won't have to give you anything). Can you build a custom ROM without kernel sources?
This doesn’t seem to have anything to do with the software, though. You can easily lock down linux for a given phone if that’s your goal (at face value anyway).
I appreciate what you are saying. So many people assume that open source gives them the keys to the castle, only to later realize that a single closed-source Broadcom driver puts the whole openness in the trash.
I've been happily buying Nexus and Pixel phones for as long as they've existed, on which you can easily unlock the bootloader and build your own ROM if you want. I don't see why Google would stop doing that just because they've released a new OS.
If you're concerned about installing your own software on your own hardware, why not buy it from a manufacturer that protects your freedom to be able to do so?
Yes, but they are still shipping AOSP and still selling Pixel phones with unlockable bootloaders.
Clearly, Google wants to be able to support companies which want to sell locked-down phones.
But as long as they sell their own phones with unlockable bootloaders, you can vote with your wallet and buy those phones.
I am concerned about the reduction in use of the GPL, but what I care about is being able to run and modify the software on my phone, and as long as Google is still releasing AOSP (and the Fuchsia open source project), and selling phones that are unlocked, I'm happy enough with that.
Yes, they might change their policy in the future. But for now there's no indication that they intend to do that; they've always been good at selling their own phones and Pixelbooks with unlockable bootloaders that do allow you to replace the ROM if you want.
I (not having looked at the architecture yet) have a suspicion that Fuchsia will run on a hardware-independent layer, like Treble which they're pushing right now.
>Personally I think we’ll see Red Hat getting acquired in the next 5 years,
It would be interesting to see what would happen with Linux after that, as they are one of the biggest (if not the biggest) donors to Linux. While you'll still find Enterprise Linux for the server (whether it's them or some successor), they are also responsible for things like Linux on the desktop and FOSS in general, so projects which don't directly make money, like Cygwin, GTK, GNOME, GCC, and PulseAudio, may be in danger.
Meh. Linux itself doesn’t need Red Hat anymore. Upstream got good enough that you just don’t need to stay on proprietary backports for years and years just to keep your service reliable and secure. These days most of Red Hat’s value add on Linux is to keep deprecated features alive for their large slow-moving customers. Sure they also contribute cool bleeding edge work, but nothing that couldn’t be picked up in a heartbeat by a team at Oracle, Google, Alibaba, or a hundred other systems companies.
As for those satellite projects you mention, in my experience the dependency on Red Hat is a self-fulfilling prophecy. Their employees tend to close ranks and crowd out other contributors in the projects they sponsor. If RH disappeared tomorrow, for many of those projects the result would be more diverse contributors, with a healthier mix of ideas and priorities. It might help shock the Linux community out of the cultural rut it’s been stuck in.
It seems you're ignoring the vast amount of work Red Hat contributes as free software. See e.g. the amount of work put into the kernel: https://lwn.net/Articles/742672/. They've been increasing their number of contributors as they've been expanding. It's a pretty big company at this point, so quite a huge amount of contributions. Further, they tend to make the software of any company they buy free software.
You're being dismissive without any substance IMO.
I’m not dismissing the volume or quality of their contributions. I just don’t think they are so critically needed that Linux and its satellite projects couldn’t quickly recover if they stopped contributing (which was the GP’s question).
I'd agree with you that RH is creating a closed ecosystem, and that many of the ideas in RH land are not in the best tradition of open source software and not good for a healthy community.
But...
Slow moving can be another way to say 'proven' and who is out there deciding what should be deprecated if it isn't the big companies like RH? Is that the cultural 'rut' you refer to? That things don't move fast enough? If that is it I disagree. Most of the 'innovation' I've seen in software is wrapping old ideas for a new generation.
I completely agree that sometimes slowing down the pace of upgrades is the responsible thing to do, especially on mission-critical systems. But Red Hat is not the most authoritative or trustworthy source of information on that topic, because 1) they don’t actually build and operate enterprise systems themselves, their customers do; 2) they have an incentive to make their slow-moving proprietary forks look more useful than they actually are; and 3) they have a track record of trying to make upstream look less reliable and secure than it actually is, again with the goal of making their offering seem more needed.
My comment on “cultural rut” was unrelated. I was referring to the lack of diversity in the open-source community, and the difficulty in moving past the myths and closed-club mentality of 1960s US academia. Open-source is still primarily the playground of privileged, insecure, passive-aggressive white males cargo-culting the behavior of their predecessors, but it could be so much more.
Sounds good to me. I'd love to see GNOME and GTK die off, or at least become much less popular. There are much better technologies out there which are getting passed over because of RH's dominance here.
> Linux on the Desktop and FOSS in general, so projects which don't directly make money like Cygwin, gtk, gnome, gcc, and PulseAudio may be in danger.
People really should talk about this a lot more: Red Hat funds so much of the development of the basic components that the actual desktop environments rely on, on top of their support of GNOME. They have people working on everything from Wi-Fi and Bluetooth to power management to graphics (the Nouveau driver, Wayland, PipeWire, etc.) and audio. No other company seems to have the desire or the resources to fund these essentials at the level that Red Hat has done for many years.
Of those satellite projects, the only ones about which I'd be worried are GTK and GNOME. GCC is already supported by the FSF (and if the FSF can't do it, then I guess that's one more reason to start migrating toward LLVM/clang), Cygwin isn't as big a deal anymore (MSYS2 can pick up the torch, and Windows Subsystem for Linux helps, too), and PulseAudio - while certainly better than it was a few years ago - is not the end-all-be-all of sound systems (sndio, for example, is way more pleasant IMO, and is now available for non-OpenBSD systems - Linux included).
Wait, what? Is it not the GNU Compiler Collection? It's part of GNU, which is a FSF project. Not sure about actual developers, but Richard Stallman himself is still on GCC's steering committee last I checked (among various other individuals, including multiple from Red Hat). The donation link on GCC's homepage also points to the FSF's general GNU donation page, which strongly implies the FSF is the one controlling the project's finances, too.
Regardless, that's even less reason to be worried about Red Hat totally collapsing, then. Plenty of other companies - large and small - to pick up the slack (and I highly suspect the various Red Hat contributors would probably continue to contribute anyway).
What happened there? In the 80s they were the next big thing. Then they finished their OS project (when the Linux kernel became available they finally had a FLOSS OS). But they never took off with their high priority projects. Is it that they can't find programmers? Is it that they pay so little that you might as well write your own code and host it on Git*.com?
What do they rewrite? To the best of my knowledge, they have no problem using BSD licensed software, they just won't develop BSD license software[1] (mostly, though they do approve it here and there for new projects).
[1]. More accurately, they won't approve it for new projects. But the projects are fine.
Why is gnome still the de-facto default DM? Is it just that gnome 2 was great, so gnome 3 (despite practically a new project with a new goal) inherited that position? And not just RedHat. I would expect Debian to ship with Cinnamon or xfce as the default.
GTK became "the default" because Qt had some dual-licensing weirdness until Qt3/Qt4 (I didn't pay that much attention, so don't quote me), and by that time several fundamental projects like Firefox, GIMP (of course), and OpenOffice (KOffice took until ~'09 to get production-ready) had established GTK as the primary Linux toolkit
Gnome just follows on because the gtk team is a very purple Venn diagram with the gnome team (at least for paid developers)
Unfortunately, I kind of feel torn: On one hand, I love Flutter as a concept[1]. It removes a lot of the baroque from mobile development (no need for the whole RecyclerView construction for every list! No Fragments! Doesn't require Android Studio to start working!), but I hate the language (Dart). After using Kotlin (let alone Rust), it feels like moving back to Java (no non-nullable types), and in some ways it's actually worse than Java (threads are extremely heavyweight; every interaction with native code is async and slow).
I wish there was:
1. A good Kotlin language server (unlikely, as the company behind the language is an IDE company).
2. A good reactive Kotlin GUI library (meaning, with the ergonomics of Flutter (as in, I don't have to work with Fragments and their lifecycles, just use views)).
3. A good JVM interpreter (to speed up Kotlin/Java compilation)
[1]. No I don't use Flutter to "write two apps for the price of one", as most (of my) apps are just a front-end over a back-end server, so they don't have any special logic worth saving between iOS and Android. I just find Flutter easier to work with relative to Android.
I'm on the Dart team (though I wouldn't necessarily take my comment to be an official statement of the entire team).
> (no non-nullable types)
I really wanted [1] to get those into Dart 1 (way back before Swift and TypeScript even existed), but I couldn't convince the language team at the time that it was worthwhile.
When we moved to a stricter, sound type system with strong mode, we hoped to get non-nullable types into that and ship them with Dart 2. But, as you can imagine, migrating millions of lines of code from an optionally typed, unsound type system to a sound, reified static type system is a hell of a lot of work. (I'm not aware of any cases where it's been done at this scale.)
We weren't able to fit non-nullable types into that schedule and into our users' migration pain tolerance. There is only so much you can drag them through, and just getting to strong mode was a lot.
There is still a desire to bring non-nullable types to Dart. It probably won't be soon because we want to give our users a break from migration, and give our implementation teams time to take advantage of the new type system. But I haven't given up on them, and our team's new focus on static ahead-of-time compilation makes them more important than ever.
I agree that Kotlin is a really nice language. I hope we can catch up to them with Dart and exceed them in areas.
The thing is that the longer you wait, the harder it will be to migrate, as the codebase size grows (and once you leave beta, people assume that the language is finalized and won't be too happy being forced to refactor their code when Dart 3 comes out). But as it is, I doubt that the nice parts of Kotlin will ever make it to Dart.
> The thing is that the longer you wait, the harder it will be to migrate.
Definitely preaching to the choir on that one. I pressed my case as hard as I could before 1.0.
We do still have more freedom to make breaking changes than many post 1.0 languages do because, frankly, we don't have that many users. But every day, the cost to make that change goes up.
> people assume that the language is finalized and won't be too happy being forced to refactor their code when Dart 3 comes out
I was worried about that with the transition to the new type system in Dart 2, but users — internal and external — were surprisingly accepting of the breakage. We don't want to be cavalier about breaking them, of course, but my impression is that there is more room for significant changes than I'd initially assumed.
> But as it is, I doubt that the nice parts of Kotlin will ever make it to Dart.
Dart will never be Kotlin, but I hope we can get to a point where most users don't consider it to be deficient compared to Kotlin and where we have some features to make Kotlin users jealous.
It's because the language was made to compile to JS, and to compete with JS, so appropriate tradeoffs were made.
For example:
1. JS is single-threaded, with extremely heavyweight threads only recently made available. So Dart is single-threaded, with extremely heavyweight threads available.
2. JS is weakly typed, so Dart was made optionally typed. Remember, it was made before TypeScript, so they probably didn't expect that type-heavy features like ADTs would interest people.
Kotlin does the same (for example, internal immutability would be much nicer, but since the JVM doesn't support it, neither does Kotlin).
1. They added optional nullability syntax to Objective-C, a language that assumes everything can be nil and doesn't crash when you send a message to nil. If a file uses nullability, it's required for the whole file; otherwise you don't have to use it. Code without nullability annotations could still interface with annotated code. Errors didn't come up unless you did very obvious things like directly passing nil as a non-null argument.
2. They migrated their entire Apple library base to have proper nullability annotations. Code that didn't have nullability annotations continued to work fine. (Or I think you could just turn off the build error with a compiler flag.)
3. Apple introduced Swift. Any Objective-C code that didn't have nullability defined was treated as an implicitly unwrapped non-null. So if you passed nil to something not expecting it in Swift, it would crash.
I liked this approach for the most part. Migration wasn't that much of a pain, and it was incentivized because I didn't want implicitly unwrapped stuff in my Swift code.
If you migrated the flutter framework and the dart stdlib to have nullability, then new projects can start with proper nullability annotations from the start, while old projects can progressively migrate files as needed.
Java also has nullability annotations, and Kotlin has Java-Kotlin nullability interop. I would really suggest doing it; nullability has given big stability benefits to large projects as it is, and would help adoption in the future as people weigh something like Kotlin or Swift against Flutter for new projects.
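TypeScript's own `strictNullChecks` migration followed a similar progressive path. As a sketch of what annotated code looks like once you're there (the user-lookup API is invented for illustration):

```typescript
// With strict null checking, null must appear in the type and be handled
// explicitly before use; unannotated legacy code behaves like Swift's
// implicitly unwrapped optionals and can still crash at runtime.
interface User { name: string }

const users: Record<string, User> = { alice: { name: "Alice" } };

// The return type admits null, so every caller is forced to check.
function findUser(id: string): User | null {
  return users[id] ?? null;
}

function greet(id: string): string {
  const u = findUser(id);
  // Without this check, `u.name` would be a compile error under
  // strictNullChecks.
  return u ? `Hello, ${u.name}` : "Who?";
}
```

Usage: `greet("alice")` yields `"Hello, Alice"`, while an unknown id falls through to the `"Who?"` branch instead of crashing on a null dereference.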
Not sure where Dart stands on enums, but enums with associated types plus optionals are the absolute killer feature of Swift. Steal those and you'll have nothing to worry about.
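For those who haven't used Swift: its enums with associated types are essentially what TypeScript calls discriminated unions, so the feature can be sketched there (the types are invented for illustration):

```typescript
// A Swift-style "enum with associated values", rendered as a tagged union.
// Exhaustiveness is compiler-checked via the `never` trick below.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; w: number; h: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "rect":   return s.w * s.h;
    default: {
      // If a new variant is added and not handled, this line stops compiling.
      const unreachable: never = s;
      throw new Error(`unhandled: ${unreachable}`);
    }
  }
}

// Optionals fall out of the same machinery: `T | undefined` plus narrowing.
function firstCircle(shapes: Shape[]): Shape | undefined {
  return shapes.find((s) => s.kind === "circle");
}
```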
As far as I know, both Hack and Flow are unsound and don't do any runtime type checks to preserve soundness. It's a lot easier to migrate dynamic code to a static type system if you have the luxury of just ignoring the type system when you want to. :)
In Dart 2, the type system is sound and checked at runtime in cases where it can be proven statically safe (downcasts, variance, etc.). That makes it a lot more work to migrate because the code actually needs to run correctly without violating any of the dynamic type tests.
Neither Hack nor Flow are reified, so it's a different matter.
Both of these languages started out with strict null-checking — Hack because there were too many bizarre falsy values in PHP to allow otherwise, and Flow because it worked out so well in Hack. So neither had to tack on strict null-checking afterwards.
There have been some similar large-scale migration efforts. For example, Hack's record types were built on top of PHP arrays, which don't distinguish between absent and null values. Consequently, Hack's record types didn't distinguish between optional and nullable fields. Furthermore, records support width subtyping by default. This is unsound: you can construct distinct types A and B with A <: B <: A.
After a lot of effort, we did migrate the entire codebase to resolve this unsoundness. Essentially, we changed every record declaration to be an "open" record type and made all new records "closed" by default (referring to whether they supported width subtyping).
The analogue, then, would be something like refactoring every type declaration in every consumer's Dart code to be annotated as nullable, adding explicit null-checks, and allowing them to write non-nullable types as the default thenceforth.
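TypeScript's structural types happen to have the same width-subtyping hole, which makes the unsoundness easy to demonstrate outside of Hack (toy types below, not Hack syntax):

```typescript
// Width subtyping plus optional fields is unsound: two distinct types can
// each be a subtype of the other, letting a wrongly-typed field slip through.
type A = { x: number };               // "open": extra fields tolerated
type B = { x: number; y?: string };   // y, if present, must be a string

const withNumberY = { x: 1, y: 42 };  // y is a number here
const asA: A = withNumberY;           // fine: width subtyping drops y
const asB: B = asA;                   // fine: A <: B because y is optional

// Statically, asB.y is `string | undefined`; at runtime it is the number 42.
const leaked: string | undefined = asB.y;
```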
The story of how I ended up on the team is probably too random to be actionable. I believe we do have open headcount, but I don't know how the hiring process works for getting a new hire on to a particular team.
Sad to see optional typing go. I thought that was the best of both worlds -- you can write type annotations for documentation or checking, but you also have the benefit of dynamic typing when static typing gets in the way.
I have a different view on it, I actually like Dart. I like the strong types and how the language is more suited for enterprise development than both typescript and JavaScript.
There is the added benefit of being able to reuse assets from your mobile app in AngularDart, which I suspect may see an increase in use along with flutter.
I’m not coming from Kotlin or Rust though, but from a .NET Core backend with JavaScript on the front end and Xamarin for apps. I can definitely see us moving from AngularJS and Xamarin to Flutter and AngularDart, as this would be a nice improvement.
I’m not too worried about the lack of available projects and libraries on dart compared to say JS, as we typically wouldn’t include some one man hobby project anyway.
That being said, I think we’ll wait and see what happens throughout 2018.
>is more suited for enterprise development than both typescript and JavaScript.
What makes you think this given the library and tooling situation? Is this strictly a "worse is better" thing where lacking modern language features is a plus (at which point I'd argue for Java) or is there a Dart feature that makes you think it's actually an improvement over TypeScript, Kotlin, .NET Core (both C# and F#) or Rust (just to avoid adding more languages to what you mentioned) for enterprise development?
I'm frankly astonished at seeing how something that I see as an extremely clear step backwards as far as languages go is getting so much traction; not sure if it's just because Flutter is that good a tool (in which case I'd think we should be aiming to make the native core available to other languages) that people are willing to root for the whole package or if there's something else.
I think you misunderstood me. I don’t think it’s an improvement over c# (or Kotlin, but I have no experience with Kotlin so I wasn’t commenting on that).
I think it’s an improvement over JavaScript, TypeScript and the Xamarin experience + tooling, with the added advantage of letting you share a language across your clients.
I don’t think dart is great though, it’s just better than the terrible alternatives.
TypeScript has ADTs, non-nullable types and the tooling is one of the best (great autocomplete, support for refactorings, incremental compilation, yarn is a great package manager, etc).
Its type system is also one of the most advanced in mainstream languages, being inferior only to Scala and Haskell.
In comparison, Dart doesn't have ADTs or non-nullable types. For a new language, I consider this very underwhelming.
I'm considering using Flutter for a new project because it seems to be a great platform, but having to use Dart instead of TypeScript is a step backwards.
Here are my personal opinions as a long time Dart/JS/TS developer.
- Dart had a lot of features before either JS or TS did. Important features like cancellable promises or optional chaining are still missing from JS/TS (I know they might be coming soon), as are some nice quality-of-life features like named constructors.
- As a superset of JS, TypeScript has a lot of idiosyncrasies that might bother people. Personally I don't mind, but people new to web development often find Dart to be a friendlier experience, with fewer pitfalls.
- To me, and this is very subjective, the TS syntax is uglier than either JS or Dart. The types get in the way and add noise to the code. I find Dart's and pretty much any other language's type declarations much cleaner.
However, I have two very big problems with Dart that prevent me from using it as much as I'd like to.
- JS interop is much, much cleaner in TypeScript/Flow. Every compiled-to-JS language that isn't a superset suffers from this. It's what prevents me from using ReasonML seriously too.
- No support for JSX. I can't go back to writing nested createElement after using JSX
I thought ReasonML's JavaScript interoperability was quite good; it seems to compile faster, and the type system cannot lie about its inferences like it can in TS.
But the main thing for me between TS and Reason is that exhaustive pattern matching is better in Reason; it feels janky in TS.
Could I use ReasonML/JS with Flutter widgets and ignore Dart?
Or languages that compile to JS (like TypeScript). Maybe one day we'll get languages compiling to Dart[1].
[1]. Although it'll be a pity. JS is stuck as a compilation target because it's a standard, and it's old, and even then wasm may one day take over from JS. Flutter could have been a library, and one could have written in Go or Rust or Java with Flutter bindings. As it is now, I don't know if it's possible.
Extensions is also my #1 most missed language feature! Wasn't possible before when Dart was dynamically typed, but perhaps now with Dart 2's strong mode we'll see it in future.
>As the oxidize projects continue to land this email will become more and more obsolete. There's simply no way for C++ to compete with Rust,
Not everything is fixable with Rust. JS, for one, is JIT-compiled, so Rust wouldn't help much there (it's one of the reasons it's not being oxidized).
"Also, Rust’s memory safety only applies to code written in Rust, but a JIT compiler also generates machine code and then jumps to it. That generated code does not benefit from rustc’s static analysis."[1]