Bosque Programming Language (github.com/microsoft)
199 points by kermatt on May 14, 2020 | 174 comments



The language doesn't seem like the interesting part here. Bosque appears mainly to be a vehicle for exploring a different way of thinking about compiler IR. From the MSR site[1]:

> In the Bosque project we ask the question of what happens if the IR is designed explicitly to support the rich needs of automated code reasoning, IDE tooling, etc. With this novel IR first perspective we are exploring a new way to think about and build a language intermediate representation and tools that utilize it. Our initial experiments show that this empowers a range of next-generation experiences including symbolic-testing, enhanced fuzzing, soft-realtime compilation with stable GC support, API auto-marshaling, and more!

[1] https://www.microsoft.com/en-us/research/project/bosque-prog...


I know that Microsoft is all about grabbing as much developer mindshare as possible these days, but does nobody remember Steve Yegge and his Grok system at Google? It's basically this, just ten years ago: have the IR throw nothing away so that the tooling can interact with it more richly. I guess the best way to be successful is to bury your sources.


The Roslyn C# compiler also doesn't throw anything away - you can recreate the source from the AST, complete with whitespace and comments. This is necessary to support safe refactorings, so I don't think this is a novel idea. IntelliJ must have supported something like this for ages.
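For concreteness, the TypeScript compiler API makes trivia (comments and whitespace) recoverable from its nodes in much the same spirit; a minimal sketch:

    import * as ts from "typescript";

    const source = "// adds two numbers\nfunction add(a: number, b: number) { return a + b; }";
    const sf = ts.createSourceFile("example.ts", source, ts.ScriptTarget.Latest, true);

    // getFullText() includes each node's leading trivia, so the tree
    // round-trips back to the exact input text.
    console.log(sf.getFullText() === source);        // true
    console.log(sf.statements[0].getFullText(sf));   // includes the leading comment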

Bosque sounds like something beyond Roslyn, but it is not clear to me what.


Grok seems to have been open-sourced as Kythe, and I suspect it was significantly nerfed like other things they've released (e.g. Bazel).

Additionally, it looks like Kythe was never fully stabilized, and there's been no widespread usage of it. It's shovelware.

I don't think you should blame MS for not citing an internal tool that they never properly released.


The snark and meanness in this HN thread are totally uncalled for, and they're why people don't publish code. I can't believe this thread is on the front page together with https://news.ycombinator.com/item?id=23157783 . Go read that, especially the ending comments on open source.

> I made this thing that I thought was cool and I gave it away, and what I got back were occasionally friendly people who nicely requested things from me, but more often angry users who demanded things of me, and rarely anyone saying thanks.


The issue here is the link; it doesn't do a great job of explaining _why_ Bosque is a breakthrough. A bold claim like that needs evidence.

What's special here? From glancing at the code snippets--which is really all we have in the way of evidence on that page--I see an interesting language concept, but it's not entirely clear how that relates to the initial claim.

There are definitely some interesting elements, though. It looks like it borrows a lot from TypeScript and C#, then attempts to make the resulting syntax a bit stricter with various forms of contracts. I think the goal is to make stronger guarantees at compile-time and promote better design practices. That, in turn, can result in faster applications because the compiler can make more assumptions.

If that is the goal, it needs to be explained more clearly in the readme before receiving widespread publicity, and there needs to be text explaining how each feature achieves that goal. A big part of developing a new programming language is selling it: you have to convince a lot of opinionated people that it's worth their time and energy to invest in learning your technology.

Ideally, these arguments would be written during the design process before implementation even begins, so I don't think it's unreasonable to expect that they be available in the readme.

This might be a better link for the HN post: https://www.microsoft.com/en-us/research/project/bosque-prog...


This is a much better link. The bold, vague buzzwords on the GitHub page ("a breakthrough research project", "a high productivity development experience", "an unprecedented tooling ecosystem") set an antagonistic mood towards the whole project.


Gets me pretty excited though. MS has a track record of productive dev experiences and great tooling ecosystems. I'm actually looking forward to playing with this!


> I see an interesting language concept, but it's not entirely clear how that relates to the initial claim.

I didn't see any concept that hasn't been practiced in PL design for decades. Yeah, they may not be widely known, but suggesting that "structural, nominal, and union" types are anywhere near any kind of "breakthrough" is just too much. If anything, the language seems conservative in its design rather than novel.

EDIT: actually, this looks like a much better link, in that there's at least some meat to chew on: https://github.com/microsoft/BosqueLanguage/blob/master/docs...


Right, I don't see the justification for "breakthrough" on that particular page. It's interesting in that it's attempting to bring more performance to the JS world using old tricks, but from that page alone, I wouldn't call it breakthrough. I'd need to see more concrete justification for that term. It's possible that the compiler itself is truly groundbreaking, but that isn't really addressed in the readme.


I actually went through the language doc, which describes the language in more detail (there are still some WIPs and TODOs). I was able to name at least one language that would be considered "prior art" for every one of the features detailed there. A good number of languages also come close to supporting all of them in one package, and languages that offer most of the features are a dime a dozen.

There are a few mentions here and there which do look interesting, but whatever the "breakthrough" turns out to be - and maybe there really is one - it won't be in programming language design for sure.


This can almost be said for many languages, even popular ones like Rust, which didn't invent most of the things it does.

Who cares? It’s all about how they put it all together. Plus some of the IR stuff is interesting.

Bosque strikes me as a better attempt at a new TypeScript than Reason. I really like that they are adding things like check, assert, validate into the language itself. I mean it just looks far better than TS in every way, especially once it starts to optimize all the language information it has. And it's totally fine it's not new. What matters more is that it puts them all together in the right way, moves at a good pace, gains traction.

I'm curious about the interop story. Performance stats. DX. Look at TS vs Flow. I tried both numerous times and it was just so obvious TS was leagues ahead - far easier to set up, work with existing code, nicer errors (early on) - and they just relentlessly improved it while Flow was crickets.

I don't care so much about it being a breakthrough (though I can see how it's basically as big a breakthrough as Rust, if it's done right), but that it becomes viable. Sure, Reason or any of the other hundred I've seen come and go over the last few years would be cool too, but for whatever reason they aren't "catching" fully. Reason's interop is finicky, the language is very verbose, and it's just been way too quiet (where are the big releases?), but they got most of the rest right. This seems to get as much if not more right, so we'll see if MS pushes it like they do TS. I hope so!


> I really like that they are adding things like check, assert, validate into the language itself.

I'm not sure. Contracts are supported in some other languages as a library feature; recent examples I encountered were Elixir, Scala, Clojure, and Haxe; there's also, of course, Racket, but there it's hard to say where the library ends and the language begins... Eiffel, Cobra, and Pyret have contracts as a language feature - it may be easier to support them that way instead of first making the language expressive enough and then writing that library.
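For reference, a library-level contract can be as small as this (a TypeScript-flavored sketch; the `requires`/`ensures` helpers are made up, not any particular library's API):

    // Plain runtime checks packaged as a "contract" library.
    function requires(cond: boolean, msg: string): void {
        if (!cond) throw new Error(`precondition failed: ${msg}`);
    }
    function ensures(cond: boolean, msg: string): void {
        if (!cond) throw new Error(`postcondition failed: ${msg}`);
    }

    function withdraw(balance: number, amount: number): number {
        requires(amount > 0 && amount <= balance, "0 < amount <= balance");
        const newBalance = balance - amount;
        ensures(newBalance >= 0, "balance stays non-negative");
        return newBalance;
    }

The language-level version buys you the same checks plus tooling and optimizer visibility, which is the trade-off being discussed here.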

> What matters more is that it puts them all together in the right way, moves at a good pace, gains traction.

No, out of these only the last one matters: there is no "good" pace (you're either moving too fast or too slow), and there is no "right" way to design a language. The only thing that matters for a language is to gain traction - otherwise, it dies - and that seems to be almost entirely unrelated to the technical features of the language. Luck (e.g., C, Python, PHP) and money (e.g., Java, C#, Go) seem to play a much greater role in making a language successful, along with vendor lock-in (e.g., Objective C, Java, C#).

The lag between an innovative feature's conception and its implementation (in mainstream languages) seems to be 10-20 years on average, but it can sometimes take 50+ years while a feature is repeatedly discovered and forgotten in 10-20 year cycles. For example, there are optional static type systems (for otherwise dynamic languages) in languages from the '80s (e.g. Common Lisp) and '90s (e.g. Dylan), and the more formal gradual typing is from around 2006 (by J. Siek and W. Taha). TypeScript, Flow, Python's mypy, and similar solutions are a mainstream-ish emanation of these ideas - it took them some 15 years to materialize and a few more to appear on the "working programmer" radar. This is a very common pattern: basically, if you want to know what programming will look like in a decade, you can simply look at the experimental features and ideas from 10+ years ago.


So basically I pointed out a novel feature that is great and you refuted it with... the fact that it exists as a library? Well, yeah. That's not really a point.

Having it built into the language is the feature - it means compile time optimization and much better vertical integration for errors, analysis.

Also, there absolutely is a good pace - you admitted it! Note that I didn't say "fast", hah.

Finally, yes. I know all the trends, etc. Not sure what your point is, it doesn’t refute anything I said.


Thanks for the link, it is interesting. I’ve wished for the "Typed Strings" feature in C# a number of times, including today, so I hope it will be ported there one day.


Stronger assumptions at compile time allow for higher quality code; invariants are amazing.


> invariants are amazing

They are, as are pre- and post-conditions; they're not, however, a breakthrough in language design by any means.

https://en.wikipedia.org/wiki/Eiffel_(programming_language)


> Stronger assumptions at compile time allow for higher quality code

This depends very heavily on your definition of higher quality code. I really hate this new trend in language discussions of declaring subjective benefits to be objective.

Stronger compile-time assumptions, by definition, reduce the scope of runtime dynamism, which, for many people, is an important feature of, say, dynamic languages.

Stronger assumptions at compile time are part of a trend of attempting to verify programs formally ahead of time, which is not the only way to produce robust software, but seems to be increasingly treated as such.


What can you do with runtime dynamism that I can't do with a language using a strong type system with dependent types? You need to know _something_ about the dynamic data, or else you can't do anything with it, and that starts to suggest there's a structure you can declare, right?

And what methods do you think are as capable as formal verification? What I mean by that includes formal specification, property based testing, theorem provers etc.

Edit: This reads argumentative but that’s not what I wanted, I’m genuinely interested.


This isn't just some person's hobby project; this is from Microsoft Research. It's a slightly different standard.


Not that you don't have a point in general, but the snark and meanness in this HN thread are totally called for.

> github.com/microsoft/BosqueLanguage

We may not know where the fishhook (or razor blade) is, but after forty years of experience, we know there is one.


Yeah, the good ol' MIT-license fishhook.


That's a bit vague; do you mean MIT-licence-then-patent-troll, or MIT-licence-then-charge-for-security-updates, or MIT-licence-then-lots-of-'optional'-extras, or ...?


I think they were being sarcastic, pointing out that MIT licenses make poor fish-hooks. TypeScript wasn't fish-hooked, why would this be?


> is a breakthrough research project

[citation needed]

In all seriousness, every time I see a project describing itself as a breakthrough, it is a red flag. No matter if it is programming, mathematics, arts, or anything. (Of course, unless used ironically.)

Either it is a huge ego and a total lack of self-skepticism (vide Stephen Wolfram's recent Theory of Everything), or meddling by the marketing department.


They are probably overselling it but I just scrolled straight past that to evaluate the feature list, and I definitely like some of the things in it: nominal data types with invariants, validated strings, pre/post-conditions, familiar TypeScript-like syntax etc.


Cool. Could you maybe just read ahead and evaluate the claim for yourself and then post a comment about that if you end up agreeing/disagreeing with it?


I read through the whole readme. I can’t figure out which part of this they think is the “breakthrough”.


I just read this as "within the context of MS research projects — maybe even specifically in the category of programming languages — Bosque has a relatively high breakthrough value".

But I might be wrong here.


If you have worked with F* or Low* then you probably have some exposure to using Z3, and getting this kind of analysis from it is a breakthrough. I just wish Z3 was available on more platforms by default.


"Bosque simultaneously supports a high productivity development experience [...] with a performance profile similar to a native C++ application.[...] Bosque also brings an unprecedented tooling ecosystem including zero-effort verification, symbolic testing, dependency management validation, time-travel debugging, and more."

If this is not breakthrough, what is?


At this point any new systems language aiming at productivity has to prove itself not just superior to C++, but superior to Rust, without being significantly worse in any aspect. It’s already suspicious by virtue of having a hand-wavy "Int" type (what size/signedness is that?) and it appears to be object-oriented (so we have to rely on compiler optimisations to remove dynamic dispatch) and garbage-collected (so by default any non-trivial type is heap-allocated). These are just ways in which it is worse than Rust as far as performance goes, since "ergonomics"/"productivity" is so subjective. It seems to improve on C++ only by taking the most common C++ patterns and building them into the language so the language doesn’t have to be so immensely complicated, while also improving the syntax. That’s simply not enough for a modern language to be competitive.


> It’s already suspicious by virtue of having a hand-wavy "Int" type (what size/signedness is that?) and it appears to be object-oriented (so we have to rely on compiler optimisations to remove dynamic dispatch) and garbage-collected (so by default any non-trivial type is heap-allocated). These are just ways in which it is worse than Rust as far as performance goes, since "ergonomics"/"productivity" is so subjective.

While I agree that this language doesn't seem to be differentiated enough to compete, I disagree with your apparent premise that new languages need to be as fast as Rust. I personally would welcome a new language that gives me full-spectrum dependent types with great tooling and moderate performance. There are many aspects to programming languages beyond raw speed.

The world has enough cookie cutter procedural and OOP languages. I'd love to see a new language from a different paradigm succeed.


Have a look at Nim: https://nim-lang.org/

It's a systems language that's focused on readability and performance. It has OOP but isn't focused on it, and has some of the best AST metaprogramming out there built in as a core principle, so it's easy to extend the language. Strong static typing with type inference, a specific type for garbage collection (the ref type) - everything else is on the stack by default, or you can manually manage memory.

Looks a bit like Python, compiles to C, C++, Objective-C, JavaScript, and experimentally to LLVM. Good support for Windows, Linux, and Mac (and anything you can target a C compiler for). Performance matches equivalent code in C, C++, and Rust. Programs compile to standalone executables, making them easy to distribute. Compilation is very fast.

If I only have one thing to say about it, my personal experience has been that Nim makes programming more fun by being really low friction; it just gets out of your way, yet runs really fast. It's great for scripting out a prototype for something, but because of the high performance that prototype can be expanded into a full product. It also helps that you can write server and client code in the same language too.


The GP asked for a single key feature: dependent types. Nim doesn't have them and never will, so why bring it into the discussion?

Sadly, “Look at Nim” seems to be the new “rewrite it in Rust”…


You're right, no dependent types, though to be fair that wasn't the only thing mentioned, and none of the other replies have yet suggested a language with dependent types either.

I was responding to:

> ...great tooling and moderate performance. There are many aspects to programming languages beyond raw speed. The world has enough cookie cutter procedural and OOP languages. I'd love to see a new language from a different paradigm succeed.

Nim's paradigm is fairly open (no small thanks to metaprogramming and unified function call syntax), and drops a lot of the baggage from the usual class (ahem) of OOP languages. There's loads of mainstream languages that focus entirely on OOP and I really resonate with wanting to explore different approaches to creating solutions, as I think OOP tends to colour how a language approaches problems.

Seems a bit sudden to treat my posting a reply to this as being in the same vein as "rewrite it in Rust".

In terms of languages with existing dependent type implementations, it looks like the main options would be ATS, Agda, F*, or Idris. Some of these are pretty far away from the OOP paradigm too.

Also:

> Nim doesn't have them and never will

https://github.com/nim-lang/RFCs/issues/172


> In terms of languages with existing dependent type implementations, it looks like the main options would be ATS, Agda, F*, or Idris. Some of these are pretty far away from the OOP paradigm too.

This is an OK response to the original question.

> Seems a bit sudden to jump from my posting a reply to this as the same vein as "rewrite it in Rust".

The thing is: 90% of comments talking about Nim come from people like you, whose entire comment history is over 90% about Nim, and most of the time it comes up in contexts where it's borderline irrelevant to the subject.

Aggressive proselytism like this has hurt Rust a lot, and it's definitely going to hurt Nim as well if you aren't careful.


My comment history is 90% talking about Nim because this is the account I talk about Nim on :) Probably it's the same for other people. My last comment was 7 months ago on the 1.0 release.

It seems like when talking about a smaller language you have to walk that fine line between putting an experience of using them out there, and being a PR ambassador. I'm really not into that, but I guess that's the reality.


For ‘moderate performance’ surely JVM based languages are what you’re looking for? There’s great tooling and a very low barrier to creating new languages.

Creating a new systems programming language like C++, Rust or Zig is by contrast a lot more effort and means having significantly worse support for debugging and IDEs unless you put a lot of effort in (generating good DWARF debug data for a new language is hugely complex).


> For ‘moderate performance’ surely JVM based languages are what you’re looking for? There’s great tooling and a very low barrier to creating new languages.

Not sure I understand what you're suggesting. I was asking for a language with dependent types (or anything that isn't just another procedural/OOP language). Such a language could use any runtime, whether it be the JVM or anything else.


I’m working on a language that will hopefully meet both of your criteria, so at least you can take encouragement that you are not alone.

I'm working on a language based around the recent work of Pfenning, Reed, and Pruiksma (Adjoint Logic) and Krishnaswami's Dependent/Linear research (both of which go back to Nick Benton's '94 work). It is definitely not OOP; it is a compositional language (a lot like the concatenative language family) and is rooted in explicit parallel and sequential composition, with one of the adjoint logics being the type-theoretic counterpart of intuitionistic logic (Martin-Löf dependent type theory).

There are people working on things all over the non-OOP and advanced-static-types spectrums, so don't lose faith in progress yet. I have plans to release the 0.1 website and 'compiler' before July 1. Of course it is going to be a bumpy road, but I'm having a great time working on this project.


Presumably LLVM closes the gap significantly?


> I personally would welcome a new language that gives me full-spectrum dependent types with great tooling and moderate performance.

I'd suggest having a look at Swift. It is very similar to Rust in many aspects (particularly the type system), with slower performance (due to some of the abstractions). The tooling on macOS is already really good, on Linux it is getting there, and on Windows the next release will add official support.


I don't see how Swift is similar to Rust -- it's a garbage collected, OO language, not really appropriate for low level / systems level programming. It's a nice looking language for what it is, with some nicer modern features in its type system etc, but its niche is not the same as Rust.


> it's a garbage collected, OO language, not really appropriate for low level / systems level programming

I'm of the impression that Swift is reference counted, which, while technically a kind of GC, is also appropriate for low level / systems programming (which isn't to say that Swift is a good language for low level / systems programming; only that its memory management isn't the disqualifying factor).


I wouldn't consider reference counted GC systems level appropriate. It _can_ be more deterministic, but not when it's tracing and collecting cycles (which a decent modern RC implementation must do) and it usually plays havoc on L1 cache (by touching counts).

You can make RC quite nice (I've written cycle collecting implementations before), but there are reasons why C++ programmers are generally encouraged to avoid shared_ptr and use unique_ptr whenever possible: object lifetimes become harder to reason about or trace, and there are performance implications.

Now if the garbage collection was optional, and one could still freely heap or stack allocate and free, then, yes, I could see it. But I don't think that's the case w/ Swift, at least not last time I looked at it. It's also why Go is imho not a 'systems' programming language.


> I wouldn't consider reference counted GC systems level appropriate. It _can_ be more deterministic, but not when it's tracing and collecting cycles (which a decent modern RC implementation must do)

Swift has what you would call an ‘indecent’ RC design, then, because it doesn’t.

> and it usually plays havoc on L1 cache (by touching counts).

Swift (like Objective-C) can store the refcounts either inline or in supplementary structures, for performance.

> Now if the garbage collection was optional

In the common case, it practically is, as (most) value types will only ever live on the stack, or inline in another data structure.


IMHO inline refcounts will still mess with the cache somewhat, as updating a reference count often requires bringing something into cache (or evicting something from it) that wouldn't be touched with a plain pointer reference.

Glad to hear that value types are optimized well.


It is for Apple, where the long term roadmap is for Swift to become the systems programming language of their platforms.


Swift is just C++ with rubberized corners. Distinctly “meh” as a language in its own right, embarrassingly knotty around its ObjC bridging (there’s a basic impedance mismatch between those two worlds), and certainly doesn’t have anything as powerful as dependent types.


> embarrassingly knotty around its ObjC bridging (there’s a basic impedance mismatch between those two worlds)

I think they've done an incredible job with their ObjC interop, given said mismatch.

But you're right — the person above who said that

> The world has enough cookie cutter procedural and OOP languages.

definitely isn't looking for Swift.


“I think they've done an incredible job with their ObjC interop, given said mismatch.”

Which is to say, the Swift devs have done an incredible job of solving the wrong problem.

Apple needed a modern language for faster, easier Cocoa development. What they got was one that actually devalues Apple’s 30-year Cocoa investment by treating it as a second-class citizen. Gobsmacking hubris!

Swift was a pet project of Lattner’s while he was working on LLVM that got picked up by Apple management and repurposed to do a job it wasn’t designed for.

Swift should’ve stayed as Lattner’s pet project, and the team directed to build an “Objective-C 3.0”, with the total freedom to break traditional C compatibility in favor of compile-time safety, type inference, decent error handling, and eliminating C’s various baked-in syntactic and semantic mistakes. Leave C compatibility entirely to ObjC 2.0, and half the usability problems Swift has immediately go away. The result—a modern dynamic language that feels like a scripting language while running like a compiled one, which treats Cocoa as a first-class citizen, not as a strap-on.

(Bonus if it also acts as an easy upgrade path for existing C code. “Safe-C” has been tried before with the likes of Cyclone and Fortress, but Apple might’ve actually made it work.)

Tony Hoare called NULL a billion-dollar mistake. Swift is easily a 10-million-man-hour mistake and counting. For a company that once prided itself on its perfectly-polished cutting-edge products, #SwiftLang is so very staid and awkward-fitting.


I don't have enough experience with Swift to agree or disagree with you about it, but it strikes me that there were already Smalltalk-like languages out there that fit this niche somewhat -- such as F-Script -- and Apple could have gone down that road instead of shoehorning Swift into the role.

Objective C already had Smalltalk-style message dispatch syntax, and something fairly close to Smalltalk blocks/lambdas. So it's not like existing Cocoa programmers would have been frustrated or confused.

Clearly the original NeXT engineers were inspired by Smalltalk and wanted something like it, but had performance concerns, etc. Perhaps there would have been performance concerns with moving to a VM-based environment for mobile devices, but I think a modern JIT could have alleviated those problems, as we've seen with V8 and others.

So I think it was actually a missed opportunity for Smalltalk to finally have its day in the sun :-)


Agreed. Swift is a language designed by and for compiler engineers; and it shows. Contrast Smalltalk which was designed by and for users and usability. Chalk and cheese at every level—especially user level.

Alas, I think decades of C has trained programmers to expect and accept lots and lots of manual drudgework and unhelpful flakiness; worse, it’s selected for the type of programmer who actively enjoys that sort of mindless makework and brittleness. Busyness vs productivity; minutiae vs expressivity. Casual complexity vs rigorous parsimony.

Call me awkward, but I firmly believe good language design means good UI/UX design. Languages exist for humans, not hardware, after all. Yet the UX of mainstream languages today is less than stellar.

(Me, I came into programming through automation so unreasonably expect the machines to do crapwork for me.)

Re. JIT, I’m absolutely fine with baking down to machine code when you already know what hardware you’re targeting. (x86 is just another level of interpretation.) So I don’t think that was it; it was just that Lattner &co were C++ fans and users, so shaped the language to please themselves. Add right time, right place, and riding high on (deserved) reputation for LLVM work. Had they been Smalltalk or Lisp fans we might’ve gotten something more like Dylan instead. While I can use Swift just fine (and certainly prefer it over C++), that I would have enjoyed. Ah well.


I mean, I work in C++ all day (chromecast code base @ Google), and I like the language. But I also know where it does and doesn't belong. For application development, particularly _third party_ app dev, it makes no sense. And neither did Objective C, which is the worst of both worlds. I had to work in it for a while and it's awful.

I agree Dylan (or something like it) would have been a good choice, except that it wouldn't mate well with the Smalltalk style keyword/selector arguments in Cocoa, also it has the taint of being associated with the non-Jobs years and so maybe there would have been ... political... arguments against it.

They just needed a Smalltalk dialect with Algolish/Cish syntactic sugar to calm people down, and optional or mandatory static typing to make tooling work better.


But it doesn’t have dependent types does it?


Nope. I think it periodically gets floated on Swift-Evolution, but someone would have to design and implement it… and I suspect the Swift codebase (which is C++; it isn’t even self-hosting) is already fearsomely complex as it is.

Or, to borrow another Tony Hoare quote:

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”

..

Alas, big complex codebases facilitate fiddling with the details over substantive changes; and that’s even before considering if the Swift language’s already-set syntax and semantics are amenable to expressing concepts they weren’t originally designed for. Stuff like Swift evo’s current thrash over trailing block[s] syntax reminds me of Python’s growth problems (e.g. its famously frustrating statement vs expression distinction that’s given rise to all its `lambda…`, `…if…else…`, etc nonsense).

It’s hard to scale after you’ve painted yourself into a corner; alas, the temptation is to start digging instead. I really wish Alan Kay had pursued his Nile/Gezira work further. That looked like a really promising approach to scalability. We really need better languages to write our languages in.


> has to prove itself not just superior to C++, but superior to Rust

... and Zig and Nim and D, I guess? None of them are seeing enough usage to even score them meaningfully against C++. The incumbents are C/C++, and there's no one else within two orders of magnitude. Just experiments at various stages; some are still in the lab, others have started some field trials, but that's about it.


As I'm writing this, I've spent X hours trying to find where my C project was leaking memory. It turns out one of the OpenSSL pointers needed to be freed explicitly, which was my fault for having only skimmed their docs, and their docs not explicitly saying so.

Point is, with Rust this wouldn't be a thing. I wouldn't have to compile my program with a number of clang flags and then run the sanitizers and try to fish out where this could possibly be happening. That is just 1 clear obvious productivity win for Rust.

Have you written any recent C/C++ and have used/played with Rust?


If the library were written in modern C++, then it also wouldn't be a problem. It would have given you a std::unique_ptr, ownership would have been clear, and deletion would have been handled automatically.


Yes, and Rust doesn't protect you from memory leaks, BTW, although it does make them less likely. The overall value of a language can only be evaluated after years and many projects. My personal favorite to replace C/C++ is, by far, Zig, but I can't claim that it's the one to beat because it's years away from proving its worth, as are Nim, Rust, and, well, Bosque, I guess. Fashion forums like HN can pass judgment quickly -- that's what fashion forums are for, and why they're good entertainment but not to be taken too seriously -- but the real world of the software industry takes much, much longer, and has a far higher bar for evidence.


Let me clarify: in this actual case it would have. In Rust, memory that gets allocated in a function is freed when it goes out of scope. So the function returns -> stuff gets freed, unless you explicitly tell the compiler otherwise.

I haven't seen Zig and I'll check it out. But some of the "fanfare" is necessary to get people involved and things built. Many other langs and projects that are technically worthwhile never get any of it and just languish.


I don't have a problem with the fanfare, but let's not drink our own kool-aid, yeah?

If there's a new language that wants to try its luck, it still only needs to beat the incumbent, not the rest of the wannabes (one or some of which may well one day be the incumbent, but none are anywhere near that yet).


Indeed, at the end of the day there are certain domains where they are unavoidable, and regardless of countless rants from our side, those are the tools that get picked when one of said domains needs to be addressed.

Given your line of work, why not Java itself, in a hypothetical future where Valhalla is done and AOT is a standard feature on equal footing with JIT capabilities/performance?


Maybe, but I'm allowed to like more than one language, no? :)

But seriously, I don't think AOT can ever match a JIT on peak performance without reducing the abstraction level and significantly increasing programmer burden, and much of Java's design is around "we automatically turn RAM into speed." It's a great value proposition for huge swathes of the software industry, but I don't think it's necessarily the best strategy for niches that require performance and are RAM-constrained.

I like Java and I like low-level programming even though it requires more effort regardless of language.


Sure, just curious. :)

Currently I am on a mix of .NET (C# F#) alongside C++.

I agree with the performance part; that is why the best option is to have both around, AOT + JIT.


You can't really compare Rust to Zig & Nim anymore. 2016 is over, Rust is deployed in the wild on millions of computers (Dropbox, VSCode, and obviously Firefox), there are dozens of Rust packages in Debian apt repositories and it's being used by many companies (Microsoft, Amazon, Facebook, Google, etc. even Nike!).

Of course it's incomparable to C and C++, and it will never replace them, but comparing it to Nim and Zig makes no sense either.


The difference between 0.000001% and 0.001% is much less important than between 0.001% and 20%. Rust has less penetration than Haskell and Erlang, and regardless of what you think of their merits, influential as they may be, none of them are factors in the software ecosystem. Languages with 10x more use and 10x more exposure than Rust disappeared with little impact over a very short period of time. Of course, Rust could become a serious force one day, as could Zig or Nim.


> The difference between 0.000001% and 0.001% is much less important than between 0.001% and 20%.

Even if your math is pretty much arbitrary, there is a big difference between having libraries and not having them, and between having jobs and not having them.

> none of them are factors in the software ecosystem

It will probably disappoint you, but C and C++ aren't really “relevant factors in the software ecosystem” anymore, and they haven't been for the past two decades. It's been PHP, Java, JavaScript, and some C#, all over for the past 20 years. And Rust won't change anything in that hierarchy even in the best case scenario, neither will Zig or Nim. And that's completely fine.

Yet, Rust has reached a significant existence, which means you can build stuff without having to build all the necessary libs by yourself, you can hire people who know it, or even people who don't know it yet but will become proficient reasonably quickly because there are tons of learning materials. You can find a job at some company already using it, or use it for an internal project at your current company because you can show your manager that this isn't too much of a risk. All that even if you don't live in the Bay Area but in Europe. None of this was possible in 2016 for Rust, and none of it is in 2020 for Nim (and Zig isn't even stable, so it's more comparable to what Rust was in 2013 or something). If everything goes well for them, it could become the case by 2025, but not today. (First, they really need to bring more core contributors to the language, as both are still mainly developed by a single lead: at this point Andrew Kelley has still authored 50% of all Zig commits, while Andreas Rumpf is responsible for one third of Nim's. The bus/burnout factor of both languages is still pretty much 1.)


Rust has a real existence that's somewhere in the vicinity of that of Elixir, Haskell, Ada, and Delphi. It's definitely alive, but it's not "the one to beat" or a major factor, even in the systems programming space, which is what I was responding to.


Then you misunderstood the comment you were responding to, because it claimed Rust was the one to beat in terms of design, never in terms of market share.

If someone wanted to release a new functional programming language, it would be totally legitimate to ask how it compares with Haskell, and to complain that it's a poor reinvention of Standard ML, even if Haskell only has a tiny market share.


Then you misunderstood my comment because what I said was that because of its market share and age, it isn't the one to beat in terms of design, because we have no sensible way to judge how good its design is, even in comparison to other languages. The sample size is just too small, and the experiment too young.


Yes, that's what you said, and to make your point, you compared Rust to two super new languages that are still mostly developed by a single person and not being used anywhere.

Then you rectified yourself, and more accurately compared Rust with Haskell and other languages. At this point you already gave up on your initial argument, because no-one would consider Haskell too small to be a proper benchmark of a new functional programming language.

Of course, as this is an internet argument, you're not going to recognize it. Now, since we are now circling back to the initial claims, I don't think this conversation is worth pursuing. Have a nice (and safe) day.


I think most people would think Haskell is too small and that we have enough experience with it to judge the merits of its design.


... and Pascal !

Many languages are superior to C/C++

C/C++ just win because of the amount of available libraries and high-quality implementations.


I don't understand what people are trying to categorize when they say C/C++. They're incredibly different languages. It's hard to find similarities between C and modern C++ (C++2x flavor). If we put C and C++ under the same umbrella, why not put Java there too? I just think when you say C/C++ you need to clarify what exactly you mean by that.


C/C++ => C and C++, simple plain English grammar simplification.

Used all over the place in ISO C++ papers, official documentation from all compiler vendors, long gone famous magazines like 'The C/C++ Users Journal', reference books well respected in the C++ community,....

Yet a couple of people still insist on making a point out of it, but don't start one when Java/C, Java/C++, C#/C++, Python/C, etc. gets written somewhere.


Maybe I'm too much of a pragmatist, but to me, the amount of available libraries and the quality of the implementations is part of what "superior" means.

You may think that Pascal is superior on paper (that is, just the language specification). If we were programming on paper, that might matter. But we aren't.


It's very task-dependent.

If your needs are towards "indie software" that needs a snappy native UI and can make use of subprocesses to cover major dependencies, the Pascal options are really good and time-tested. It's probably underused for games in particular, in fact. Fast iteration times for game projects are a great selling point for contemporary use of Pascal, if you can justify the investment in engine code.

If you are writing server backends, there is nothing interesting going on in Pascal and you will be scraping around to find the bindings to do what you need. Likewise with a number of other common industry tasks.


> It’s already suspicious by virtue of having a hand-wavy "Int" type (what size/signedness is that?)

That is a common misunderstanding by people who haven’t spent time with high level languages.

In any language, more precise numeric types allow for faster arithmetic operations if used appropriately. This is the primary reason Java is still a few times faster than JavaScript on application benchmarks focusing on arithmetic, and why those performance differences drop considerably when comparing non-arithmetic operations.

The lower level the language is - the closer to the metal - the more these performance cases matter. This is also acceptable from a design perspective because you also need to worry about memory management, pointer arithmetic, type conversion, and various other low level concerns anyway.

The whole point of a high level language is to not worry about those things. The compiler/interpreter does that heavy lifting. For example, Java and C# are both garbage collected, so you, by default, don't get a say in memory management. If you wanted that control then just use C++.

A major pain point in Java is conversion of numeric types. JavaScript only has one numeric type, which is really shitty in all respects, yet writing numeric operations in JavaScript is so much cleaner and more fluid in the code.
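To make the "one numeric type" point concrete (plain TypeScript/JavaScript, nothing Bosque-specific):

    // JS/TS `number` is an IEEE-754 double, so integer behaviour degrades past 2^53
    // and decimal fractions are approximate:
    console.log(2 ** 53 === 2 ** 53 + 1);       // true - precision is gone
    console.log(0.1 + 0.2 === 0.3);             // false - binary floating point

    // Exact integer arithmetic needs a separate opt-in type:
    console.log(2n ** 53n === 2n ** 53n + 1n);  // false - bigint stays exact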


This is correct, and applies (and is helpful) to many cases.

But the moment you need to shift an "Int", or do bitwise operations or transmit it in a network packet (think about endianness, etc.), is the moment you may regret that the compiler is doing too much heavy-lifting for you.


If it actually is an unlimited-width integer, bit logic/shifting can work fine, and network packets have to be x&0xFF, x>>8&0xFF, etc. anyway. But that's painful for direct translation to machine code (what happens when I try to return 340'282'366'920'938'463'463'374'607'431'768'211'456 from a function?), so either it's secretly a fixed-width integer (and they're being deliberately vague about what width, which is suspicious), or the language defaults to possibly allocating memory (and possibly getting an OOM error) on every single arithmetic operation, which is a catastrophically undesirable property in a systems programming language.
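For example, the masking/shifting pattern with an arbitrary-precision integer (TypeScript's bigint here, purely as a stand-in; the helper name is made up) looks like this:

    // Pack the low `byteLen` bytes of an arbitrary-precision integer,
    // little-endian, the x & 0xFF / x >> 8 way.
    function toBytesLE(x: bigint, byteLen: number): Uint8Array {
        const out = new Uint8Array(byteLen);
        for (let i = 0; i < byteLen; i++) {
            out[i] = Number((x >> BigInt(8 * i)) & 0xFFn);
        }
        return out;
    }

    toBytesLE(0xDEADBEEFn, 4); // Uint8Array [0xEF, 0xBE, 0xAD, 0xDE]

The logic is fine; the cost question above is about what each of those bigint operations does under the hood.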


A better C++ could be an attractive proposition. There are precedents of successor languages which improve on existing ones, e.g. CoffeeScript, Kotlin (at least at the beginning), and Reason.


Have you used Rust? It doesn’t have backwards compatibility with C++, but the semantic model really is very similar to modern C++, just without hundreds of the footguns.


Arguably garbage collection is a huge boon to productivity, though. I agree about the first two, but I think the whole memory allocation debate is too contextual to be an issue in the general case. Big projects tend to have customized memory allocators, which makes most benchmarks' usefulness dubious - that, and reference counting can be nondeterministic too (you deallocate an object, triggering a huge chain of frees).


Even a cascading deallocation is still deterministic. It’s true that it doesn’t have hard latency guarantees, though (and true GC with hard latency guarantees is arguably more useful in many cases).


> It’s already suspicious by virtue of having a hand-wavy "Int" type (what size/signedness is that?)

This is a silly criticism. The language is brand new; they haven't gotten around to ironing out low-risk minutiae like naming numeric types. Anyway, "Int" can still be well-specified even if the name doesn't indicate size (the signedness is just as clear as with Rust and C++).

> it appears to be object-oriented (so we have to rely on compiler optimisations to remove dynamic dispatch)

Not a fan of OOP or C++, but implicit dynamic dispatch isn't a property of OOP as C++ demonstrates. And to that end, Bosque seems to copy C++ in this regard, or at least the code snippets show methods annotated with "virtual".


Hi, project owner here, great to see this on HN and always happy to hear from folks here and on the GitHub repo.

We also have a webinar with Q&A scheduled for Thursday morning (https://note.microsoft.com/MSR-Webinar-Programming-Languages...) which may be of interest as well.


The README is full of buzz words, and yet does not explain at all how these supposedly amazing "breakthroughs" have been achieved.

The code samples look a lot like Swift, with some C++/Rust/Scala/general ML sprinkled in, so I can't see anything special here directly.

Especially relating to the promise of being as easy as Typescript but as efficient as C++/Rust/etc.

If there are great ideas in here, I'm happy to hear about them, but they should at least be mentioned and referenced clearly.


Can you explain some further things about this language?

1. How does the GC work? It says "novel reference counting" - does that mean it leaks cycles or handles them (either by also tracing or by preventing them statically)?

2. Is that the only thing it does to provide a C++-like "resource efficient and predictable runtime"? After all, that's basically Swift (or Python+static types). I think the main improvement that C++ (and C# and Go) have over languages like Java is ability to avoid heap-allocated objects (i.e. stack-allocated structs).

3. It looks like, but it's not entirely clear, that the compiler checks preconditions at compile time - so e.g. I shouldn't be able to call `divide(a, b)` without proving that `b != 0` - is this the correct interpretation? How do you handle mutability and/or concurrency, if at all?


Hi and thanks for the questions:

1. By design the language provides some novel memory invariants including, no cycles in the object graph, no old-to-new pointers, and no pointer updates. Thus, we don't need to worry about cycle collection, can greatly reduce the number of Ref-Count operations, and can (later) employ pool allocation more consistently.

2. Bosque also supports by-value types (including the future ability to do by-value unions) and, since the language is referentially transparent, the compiler can aggressively use stack allocation and copy semantics. Also, the collections are all fully determinized and use low-variance implementations to avoid "bad luck" performance anomalies.

3. The compiler does not enforce the checks. They can either be checked at runtime or checked by converting the program into a logical form that Z3 or another theorem prover can check. Values in the language are immutable and there is no concurrency (yet), but since the language is immutable, concurrency is by definition data-race free.
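Very roughly, and sketching in TypeScript terms rather than the actual Bosque pipeline (everything below is illustrative only), the two modes for a `requires b != 0` contract on `divide` look like:

    // Mode 1: checked at runtime - the contract compiles to a guard.
    function divide(a: number, b: number): number {
        if (!(b !== 0)) throw new Error("requires violated: b != 0");
        return a / b;
    }

    // Mode 2: checked statically - the same contract becomes a verification
    // condition handed to an SMT solver such as Z3. Conceptually, per call site:
    //   (declare-const b Int)
    //   (assert <caller's constraints on b>)
    //   (assert (= b 0))          ; can the precondition be violated here?
    //   (check-sat)               ; "unsat" everywhere => the contract is verified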


Great to see this point in the design space explored! Looking forward to seeing where it goes.


Actually that was already a thing in Mesa/Cedar, Modula-3, Oberon language family and Eiffel.

Sadly, Java did not take this into account, nor AOT support out of the box, and it is only now catching up.


Eiffel does not enforce preconditions at compile time. What are you referring to?


I was only referring to:

" Is that the only thing it does to provide a C++-like "resource efficient and predictable runtime"? After all, that's basically Swift (or Python+static types). I think the main improvement that C++ (and C# and Go) have over languages like Java is ability to avoid heap-allocated objects (i.e. stack-allocated structs)."


The use of '=' here scares me:

   function add2(x: Int, y: Int): Int {
       return x + y;
   }
   
   add2(2, 3)     //5
   add2(x=2, y=3) //5
   add2(y=2, 5)   //7
The language already supports the '=' operator for assignment of variables in the current scope, so should you use the same operator for binding values to formal parameters in a function call? This can lead to a lot of confusion between variables in the scope and the formal parameter names of a function that is called from the current scope.


Python, one of the most popular languages, uses this syntax, so I'm guessing the vast majority of people don't find it scary (I certainly don't).


This seems like a non issue to me and as tomp said, is already done in popular languages without problems. The third example (keyword arguments before positional) does seem a bit odd though, as interleaving positional and keyword arguments seems like a recipe for confusion, but using = for keyword arguments doesn’t seem like a problem to me.


This and a few other parts seem to be inspired by StandardML / Ocaml

https://learnxinyminutes.com/docs/standard-ml/


I love this feature - calling with named parameters is (for me) a glaring omission from JavaScript (and, surprisingly, TypeScript). I know you can define an object argument, but not many do, and that's not very elegant.
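For anyone who hasn't seen that workaround, the object-argument pattern in TypeScript looks roughly like this - workable, but noisier than real named parameters:

    // Simulating named (and reorderable) arguments with a destructured options object.
    function add2({ x, y }: { x: number; y: number }): number {
        return x + y;
    }

    add2({ x: 2, y: 3 }); // 5
    add2({ y: 3, x: 2 }); // 5 - argument order no longer matters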


>From this perspective the natural choice for the Bosque language is to adopt a pure functional model with immutable data only.

>The Bosque language fuses functional programming with block scopes and {...} braces by allowing multiple assignments to updatable variables var

This seems like a contradiction. There should only be an if-expression if 0.1 is correct, not a branch.


Judging from the code snippets, the language supports generic types (`List<T>`). Are there plans to support higher-kinded types? Something like `typedef Ev<F extends Generic, E> = F<E>`.


Yeah, but what is it?

> The Bosque programming language is a breakthrough research project from Microsoft Research.

Tells me nothing useful about it.

> Bosque simultaneously supports a high productivity development experience expected by modern cloud developers, coming from say a TypeScript/Node stack,

It's a web language then?

> while also providing a resource efficient and predictable runtime with a performance profile similar to a native C++ application

Or a back-end dev language?

Come on, what is it for?


Does a language really have to aim for a specific use case?


Of course. A language isn't just syntax; it prescribes how and at what level you're solving problems. Systems programming languages typically expose a lot of the underlying memory management, so you can optimize it. Scripting languages provide high-level primitives, specifically so you don't have to think about memory management.


Yes, for the generic case we have like 6 other languages that have better support, better libraries, better ecosystems, ...


It has to have at least one use case, otherwise it is literally useless. At least a use case like "Python but with static types" or "Swift but for Windows".


Yes.


It would be good for its own sake to provide some motivation for why you'd bother using it over the dozen established alternatives with mature ecosystems and tooling.


It's a language research project.

> In the Bosque project we ask the question of what happens if the IR is designed explicitly to support the rich needs of automated code reasoning, IDE tooling, etc. With this novel IR first perspective we are exploring a new way to think about and build a language intermediate representation and tools that utilize it. Our initial experiments show that this empowers a range of next-generation experiences including symbolic-testing, enhanced fuzzing, soft-realtime compilation with stable GC support, API auto-marshaling, and more!


Then tell us this. In fact stick it somewhere obvious.

Also please talk straight, please don't mention "next-generation experiences", that's just marketing nothingspeak to HN techies.

"enhanced fuzzing" - as opposed to what?

"soft-realtime compilation" - meaning...?

"stable GC support" - oh thank god for that, finally GC is reliable and can be used in production. thankyou thankyou thankyou for banishing GC instability.


> Then tell us this. In fact stick it somewhere obvious.

The first sentence of this page:

> The Bosque programming language is a breakthrough research project from Microsoft Research.

"Microsoft Research" is in italics, even.


And I quoted exactly that line. In my first post.


Why are you asking unrelated questions then? The goal is not to motivate you to use it, especially not you, who seem uninterested in both the goals and the approach the team has proposed. On the other hand, I like the direction - but I am not going to use it on a daily basis either, maybe in a few years.


The base question was, what is it for? That is the ultimate purpose of this language. That's not an 'unrelated' question.

> The goal is not to motivate you to use it, especially not you who seem to be uninterested in both the goals and the ways the team has proposed

I knew neither the goals nor the ways of this team. Tell me, and as a guy interested in languages, I will be interested.

> I like the direction

So what is the direction?

All I'm asking for is information on this project, clearly given and not wrapped up in idiot marketing drivel ("stable GC support" FFS)


It sounds like MS Rust.


Microsoft Research is also working on Verona, which seems closer to Rust than this language, to me anyway.


> As a result of these design choices there is always a single unique and canonical result for any Bosque program. This means that developers will never see intermittent production failures or flaky unit-tests!

So Bosque programs are not allowed to receive input from the outside world?

> When an error occurs in deployed mode the runtime simply aborts, resets, and re-runs the execution in debug mode to compute the precise error!

Again, no interaction with the outside world? Or are all inputs recorded during the entire execution?

It doesn't seem like this could work as specified in many real-world scenarios...

EDIT:

> This compiles recursive functions into a CPS form that uses constant stack space, eliminating any possible Out-of-Stack issues

So recursion cannot depend on input from the outside world?

Either 1. I'm missing something, 2. Bosque programs indeed cannot receive input from the outside world, or 3. these claims seem a bit exaggerated...


The Bosque paper says the following:

> JavaScript took an interesting step by decoupling the core compute language in the JS specification from the IO and event loop which are provided by the host [...]. We take the same approach of decoupling the core compute language, BOSQUE, from the host runtime which is responsible for managing environmental interaction.

So it seems that Bosque doesn't do IO directly, but instead specifies how a given input is mapped to some output.


From what you're quoting and from what you're saying, it seems that Bosque (the core compute language) can indeed receive input from the outside world, even though it comes from the runtime (which Bosque considers the outside world).

Which means that some of the claims I quoted don't seem entirely accurate, and the other one only seems possible if the input (and possibly some state) is recorded...


> Again, no interaction with the outside world? Or are all inputs recorded during the entire execution?

Inputs would need to be cached (eek, if there's a lot of state loaded from external services or databases, or if that state is sensitive and you want to control where it may get persisted), but more importantly, outputs must not be re-emitted on the rerun (e.g. database writes and API calls).

> So recursion cannot depend on input from the outside world?

I’ve made a few mini languages (mostly for fun or for myself, but also external DSLs for end users) and I’m a big fan of not allowing unbounded loops. (Although bounded by a dynamic value like an input or the length of a runtime list is ok, so not quite the same thing as described here, it just needs to know the number of iterations before the loop begins) It makes it much easier, in my opinion, to prevent bugs, but of course it does mean you give up expressive power, so while it works for DSLs it’s not great for general purpose languages. I’m also a big fan of synchronous programming languages. I made one that basically runs inside a database transaction and reruns on conflict.
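To illustrate the "iteration count known before the loop begins" idea in TypeScript terms (the `repeat` helper below is made up, not from any of my DSLs):

    // A loop whose trip count is fixed before the body runs; the body
    // cannot change how many times it executes, so termination is trivially
    // guaranteed even though the bound comes from runtime data.
    function repeat<T>(times: number, seed: T, body: (acc: T, i: number) => T): T {
        let acc = seed;
        for (let i = 0; i < times; i++) {
            acc = body(acc, i);
        }
        return acc;
    }

    const prices = [3, 5, 8];
    const total = repeat(prices.length, 0, (sum, i) => sum + prices[i]); // 16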


Sure, but they are not simply saying that they do not allow unbounded loops; that is what some languages, like Coq, already do.

They are saying that the compiled program "uses constant stack space, eliminating any possible out-of-stack issues".

This seems impossible unless recursion cannot depend in any way on inputs from the outside world.

Mind you, even if you have constant stack space that does not mean that you won't run into an out-of-stack issue.

You can still run out of stack space if it is constant, because it might not fit in memory. It would probably be trivial to construct such a program.

Furthermore, a program with constant stack space may even run out of stack space during execution (due to the kernel overcommitting memory, for example).


There is a difference between "running out of stack memory" and "running out of memory used to store stack-structured data".

Any recursive algorithm can be mechanically transformed into an equivalent iterative algorithm that uses only a single activation record on the system call stack. If you perform that transformation on an entire program, then the total depth of the call stack is a static property of the call graph in the source code, and it does not depend in any way on run-time inputs from the outside world. The only way to run out of stack space then would be to write a program whose static, constant stack space requirements are indeed too large for the machine it is run on.

Transformed algorithms may, of course, require "manually" tracking data in a stack structure, and use of such a data structure may indeed result in running out of memory depending on what inputs are provided, but that's just "running out of memory, which we happened to be using to hold a stack-like data structure", not "running out of stack".
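
To illustrate, here is a rough TypeScript sketch of that transformation (my own illustration, nothing Bosque-specific):

    type Tree = { value: number; children: Tree[] };

    // Recursive version: native call-stack depth grows with tree depth.
    function sumTreeRec(t: Tree): number {
        return t.value + t.children.reduce((acc, c) => acc + sumTreeRec(c), 0);
    }

    // Transformed version: a single activation record; the "stack" is now
    // an ordinary heap-allocated array whose size still depends on the input.
    function sumTreeIter(root: Tree): number {
        const pending: Tree[] = [root];
        let total = 0;
        while (pending.length > 0) {
            const node = pending.pop()!;
            total += node.value;
            pending.push(...node.children);
        }
        return total;
    }

The call-stack usage of sumTreeIter is constant regardless of input; the heap usage of `pending` is not, which is exactly the distinction I'm drawing.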


To me it just seems that you're arguing semantics...

I mean, what is the difference between "stack memory" and "heap memory allocated to store the stack", other than the fact that the OS preallocates the first one for you by default?

To me, "the stack" and "a stack-like data structure stored in the heap that is used as a stack to replace what would otherwise be the stack" are functionally the same thing...

You know that (given appropriate conditions) you can just allocate stack frames wherever you want and point your stack pointer to whatever you want, whenever you want, right?

This is what the Go compiler does to prevent Go programs from running out of the stack space provided by the OS by default. But AFAIK they don't advertise that Go programs use constant stack space and you don't have to worry about out-of-stack issues...

So as far as I understand, there is no transformation that makes an arbitrary recursive program use "constant stack space", unless of course you just use some other memory to store what you would otherwise have stored on your stack (but then that doesn't mean your stack has constant space; it just means you're not using the default OS-provided stack space as a stack, and you're using other memory as a stack instead).

Apart from that, as I said before, even if you had "constant stack space" it does not mean that you won't have out-of-stack issues.


    To me it just seems that you're arguing semantics...
The most important type of argument to have. If you don't agree on semantics, no other discussion is worth having, as you will never know if the other party actually agrees with you or not.

    I mean, what is the difference between "stack memory" and "heap memory allocated to store the stack", other than the fact that the OS preallocates the first one for you by default?
That is a pretty major difference with wide-ranging effects on program performance right there.

    To me, "the stack" and "a stack-like data structure stored in the heap that is used as a stack to replace what would otherwise be the stack" is functionally the same thing...
System-level error messages and memory managers both disagree.

    You know that (given appropriate conditions) you can just allocate stack frames wherever you want and point your stack pointer to whatever you want, whenever you want, right?
Yes. And once you do, that memory becomes functionally privileged from the perspective of your allocation strategy.

    This is what the Go compiler does to prevent Go programs from running out of the stack space provided by the OS by default. But AFAIK they don't advertise that Go programs use constant stack space and you don't have to worry about out-of-stack issues...
Of course not. Because they actually use that space as stack space, to store function activation records.


> That is a pretty major difference with wide-ranging effects on program performance right there.

It really isn't. The only difference is that in the second case you have to call mmap() at the beginning of your program to allocate stack space and then point the stack pointer at it; the rest of the execution would remain the same.

> And once you do, that memory becomes functionally privileged from the perspective of your allocation strategy.

So you're saying that the transformation you mentioned gives you better error messages at the expense of performance, because now instead of using the stack, you have to keep allocating a stack-like structure on the heap?

If so, that's not entirely true, as you can have the same error messages without doing this transformation (that's why some kernels and runtimes can detect stack overflow and even resize the stack), so the only observable functional difference is that you replace a simple stack-like allocation mechanism with one that is slower without much benefit.

And it would still be a bit disingenuous to say that the transformation would make your program use constant stack space, as if replacing stack space with heap space somehow makes stack usage on a recursive function irrelevant...


Absolutely, there is a big difference between a compile-time, statically known stack size and bounded loops. Hell, loops can be bounded but dynamically sized. If they indeed mean that the stack size is known ahead of execution, then you are absolutely right: as long as you can have recursion that depends on user input, the stack size cannot be known statically.


>> This compiles recursive functions into a CPS form that uses constant stack space, eliminating any possible Out-of-Stack issues

> So recursion cannot depend on input from the outside world?

Why would CPS prevent recursive code from doing I/O or using its results?
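
For what it's worth, here is a rough sketch of the usual CPS-plus-trampoline trick in TypeScript (my own illustration, not Bosque's actual compilation scheme). The recursion depth depends on runtime input, yet the native call stack stays at a constant, small depth, because every step returns a thunk instead of making a nested call; the pending work lives in heap-allocated closures instead.

    type Thunk = { done: false; next: () => Thunk } | { done: true; value: number };
    type Cont = (r: number) => Thunk;

    // Factorial in continuation-passing style: both the recursive call and
    // the continuation application are deferred as thunks.
    function factCps(n: number, k: Cont): Thunk {
        if (n <= 1) {
            return { done: false, next: () => k(1) };
        }
        return {
            done: false,
            next: () => factCps(n - 1, r => ({ done: false, next: () => k(n * r) })),
        };
    }

    // Trampoline: a plain loop drives the thunks, so call-stack depth stays constant.
    function run(t: Thunk): number {
        while (!t.done) {
            t = t.next();
        }
        return t.value;
    }

    const n = Number(process.argv[2] ?? "10"); // input from the outside world
    console.log(run(factCps(n, r => ({ done: true, value: r }))));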


I like it already. But the page has a mix of statements ending with ';' and not. Which is it? Pick one and stick to it ;) It triggers my OCD more than naming interfaces 'concepts' and classes 'entities' for what seems like the heck of it. It does look Swift-ish and C#-y. But hey, if I can get something that looks like that and compiles down to a small binary as fast as C++, I'll call your interfaces 'concepts' all day.


I agree about the arbitrary renaming just to make the language look a bit syntactically distinct from other languages, while messing with the user. It really is not a big issue, but the fact that they chose to make it an issue at all by picking a new name for an existing concept is worrying.


This is pretty cool. This project has a long way to go before it's mature, but as someone who uses Typescript every day professionally, the promise of the ease and safety of Typescript coupled with the speed of C++ is really compelling.


You can already play with something like that, Static TypeScript, used on MakeCode for targeting embedded devices.

https://www.microsoft.com/en-us/research/publication/static-...

Microsoft MakeCode: from C++ to TypeScript and Blockly (and Back)

https://www.youtube.com/watch?v=tGhhV2kfJ-w

https://www.microsoft.com/en-us/makecode


This is cool, thanks for sharing. I'm rather in the same boat as OP.


The core ideas are kind of interesting, but in my experience where cool new language ideas come apart at the seams is when they are built out to completeness with libraries, packaging, configuration management and the rest.

> ... The current focus of the Bosque project is core language design. As a result there is no support for packaging, deployment, lifecycle management, etc. ...

So there is nothing solid there yet and thus significant risk that like many other languages this offering will suffer metastasis, bloat, and conflicts as it grows to be useful for solving real problems.


Oh yeah, I completely agree. To a certain extent it's best to be risk averse with things like this, especially when working on professional problems.

npm has spoiled me, so I doubt I'll ever try to use a language in anything other than a hobby capacity without a very good package manager and corresponding community of functionality.


Looks like Stainless Scala, mixed with Rust, and given a C++ syntax.

This mix looks interesting as such.

But of course Wadler's law applies: why THIS syntax? It looks like a blast from the past, bloated and complex. Modern languages mostly go for a more lightweight look and feel, usually more "pythonic". That would, imo, have been the better idea here too.


A lot of punctuation, most of it painful:

  * arrow and double arrow
  * arrow and double colon
  * apparently random semicolons
Not all syntax choices are bad (for example, consistently declaring types after things with a colon as a separator), but certainly not coherent and elegant enough to compete with Python, C/C++/Java, Lisp, etc.


I'm not sure how it supports "modern cloud developers, coming from say a TypeScript/Node stack" with that syntax.


This is another PL I won't try (not that the authors should care).

I'm still waiting for a good "system programming language" that

- has compile time execution that can call a compiler API to allow compile time code generation

- full compile time reflection, optional runtime reflection

- doesn't have any kind, shape, or form of automatic memory management

- has decent discriminated unions

- is not much more opinionated than C on how I should live my life

- operator overloading: define at least the common operators (+, *, -, etc.) for user types. Sometimes I like the idea of being able to define infix/postfix functions in general, while also assigning a precedence and an associativity (totally possible with a hand-made parser), but I think that could easily lead to a mess if misused by the community.

Zig is spot on for some of those. JB's Jai on others. For the rest it looks like there aren't many people that are bothered by the things I'm bothered with.


Take a look at Jai. It's not released yet and doesn't cover all your bullet points but it promises speed and getting out of your way, especially if you are a game programmer who loves build pipelines and data oriented design.

https://inductive.no/jai/


I gotta say you lost me at args->allof

It’s 2020, if you’re really serious about building a new language, shouldn’t you at least pick a reasonable naming case convention? (allOf, AllOf, all_of, all-of would all work)

(Not talking about the fact that allOf doesn’t seem very consistent with other languages...)


You are right, allof is inconsistent with the language naming conventions. I was updating some other collections support and made that fix as well. Thanks!


My question is why -> instead of . for method/field access. I see almost all of the examples using -> but a few use ., and unless there’s a difference (like how in C++ -> is member access through a pointer), I think -> just adds noise to the syntax needlessly.

The documentation doesn’t appear to explain when to use one over the other, or perhaps . is a typo.


Agree, the arrow is syntactically redundant in C and C++ and does indeed make code look noisier than it needs to be. (The same can be said about the “four-dot” double colon.)


all-of should not work unless your syntax for basic arithmetic is something heinous.


Personally, I’m a fan of just requiring whitespace around operators. I find expressions hard to read otherwise, so I think omitting it is bad practice; but I also find precedence rules awkward in all but the simplest expressions, so I like to use parentheses a lot, which means I don’t necessarily need whitespace-based grouping to make expressions clearer, e.g. “a/b + c” I would just write as “(a / b) + c” anyway. I’ve been programming for 20 years, so it’s not just unfamiliarity; I guess these things just cause me needless speed bumps.


Refusing to allow symbols in identifiers might make some things easier to read, but is annoying as hell if you're touching anything scientific.

If you have something clear you break tokens on, like whitespace (except within quotes), then you should have no problems allowing arbitrary symbols inside an identifier.


    x = 0
    y = 1
    x+y = 0
    z = x+y
What is z?


x+y = 0 (with that specific syntax) will error in pretty much every language.


So we will refuse to allow symbols in identifiers after all?


The only programming languages where arithmetic symbols are allowed identifiers have an identifier sigil (ala Perl, PHP), require whitespace around arithmetic (some teaching languages), or don't have conventional arithmetic syntax (e.g. Forth, Lisp).


Please read the comment I originally answered to.


Or you're Lisp


Or The Language Formerly Known as Perl6 [1], where it won't cause much friction as variable names ordinarily start with a sigil.

[1] https://docs.raku.org/syntax/identifiers#Ordinary_identifier...


Or you require whitespace in arithmetic.


Or XPath


At first sight, it seems like a Swift clone. However, there must surely be some reasons that I'm not seeing.


Preconditions; it looks like they try to "prove" them using the Z3 theorem prover.


This is laughable - pretty much a buzzword soup.

Whenever I see "this language/tool does A and B well", it turns out it can do neither A nor B well enough.

>The Bosque programming language is a breakthrough research project from Microsoft Research.

Breakthrough according to who?


How does the verification work? I see the Z3 prover in there; is there more detail on how that works with this language?


The design of the language is done in a manner that allows us to translate the code (and all conditions) into very friendly SMTLib code -- so that we don't need to add a lot of additional checks like frame rules or havocs, and so that the set of theories (and their combinations) involved is well behaved.

We have some info on this in the research papers section and documents but are planning on a full paper focused on how this is done (particularly in the presence of union types, dynamic records/tuples, etc.)
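
To give a flavor of the kind of query involved (this is a generic contract-checking illustration in TypeScript, not our actual encoding; the contract is just written in comments):

    // Generic illustration of contract checking via an SMT solver;
    // not Bosque syntax and not Bosque's encoding.
    function clampToPercent(x: number): number {
        // requires: 0 <= x
        // ensures:  0 <= result && result <= 100
        return x > 100 ? 100 : x;
    }

    // A verifier would ask the solver: does there exist an x with 0 <= x
    // such that the returned value violates 0 <= result <= 100? An "unsat"
    // answer means the contract holds for all inputs; a model would be a
    // concrete counterexample input.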


Is the type system at least as rich as (or richer than) TypeScript's?

Can I do stuff like `Partial<T>`, `Omit<T, K>`, intersection types, etc.? Is structural typing the "default"?
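
For reference, the TypeScript features I mean (plain TypeScript, nothing Bosque-specific):

    interface User {
        id: string;
        name: string;
        email: string;
    }

    type UserPatch  = Partial<User>;              // every field optional
    type PublicUser = Omit<User, "email">;        // drop a field
    type Audited    = User & { updatedAt: Date }; // intersection type

    // Structural typing: any value with the right shape is accepted.
    const p: PublicUser = { id: "1", name: "Ada" };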


I don't believe one can claim to have created a systems language if it doesn't have trivial or seamless integration with the system's C ABI.


Ha, downvoted.

Show me a successful "systems language" without such C ABI compatibility.


The first feature says all values are immutable, but then the very next one mutates a variable. What?


I wonder if the bones of this language came out of the work Microsoft put into Sing# 15 years ago.


Crossing my eyes at the syntax... confused:

- is this a crossover between C and Pascal?


Hyperbole aside, perhaps we should tinker with the language first, thereby walking a mile in the creator’s shoes, before passing judgment that it's a terrible language.


I don't wanna type function to define a function.


Why did you name your language after "woods"?


In English, "bosque" is a specific type of forest/woods (https://en.m.wikipedia.org/wiki/Bosque); it could be named for that too. (Bosque is the generic word for "woods" in Spanish/Portuguese.)


I stopped reading at "from Microsoft Research."...


Why would someone use this over Kotlin? It seems like JetBrains has a big head start over this project, without much difference in stated objectives.


  typedef Letter = /^\w$/;
  typedef Digit = /^\d$/;
Shades of Perl5. Paging jwz!



