D is being used for autonomous driving research – Audi and Mercedes (dlang.org)
273 points by pjmlp on June 24, 2019 | 189 comments



I have done an unhealthy amount of comparing of the languages in this space. I'm curious what others think about the tradeoffs between choosing Nim/D/Go/Crystal/C#/Java/Kotlin? Previously I would not have included Rust in that grouping, but I've finally bitten the bullet and started learning it... and now I would, high level abstractions with low level control, etc etc.

Essentially there's a class of programs that are performance sensitive, but also dev time sensitive. How do you choose between the set above? (The answer, if there is one, is probably Python, and then optimize the hotspots, but let's pretend we can indulge).

- Nim: fast dev time; fast performance; not v1; C/C++ interop is second to none; the newruntime seems awesome

- C#: medium dev time; med-fast performance; very OO, CoreRT and ReadyToRun are awesome

- D: fast dev time?; fastish performance; I haven't written more than a few lines

- Go: fast dev time; med-fast performance; interop is expensive, lack of high level abstractions

- Crystal: ?; fast?; have not used

- Java: medium dev time; med-fast performance; very OO, GraalVM is cool

- Kotlin: fast dev time; med-fast performance; kotlin native may be cool some day

- Rust: slow/medium dev time; fast; safe, awesome type system

- C: slow dev time; fastest?; unsafe / current tooling and docs are like performing an archeological dig.

- C++: medium/slow dev time; fast; unsafe / full of intricacies

All of the above are obviously opinions gained over time with sources forgotten and I've only written a few hundred lines in each language. PL ADD is real. At the moment, Rust is scratching all the itches though, and I'm excited to see if my mental model of programming adapts to the language and dev speed is not an issue. Obviously I like collecting hammers and am less good at finding nails.

Very cool to see D getting picked up for some serious use. D and Nim are fantastic and hopefully thrive in the coming years. I would love to see a write up of what led them to choose D.


I would say C# dev time is rather fast. I stood up a medium size project recently without ever having used C# in a serious project before and it was pretty fast to set up.

I'd say the main thing hindering dev time would be if you have people you're working with who need things to be done the "enterprise" way (i.e. needlessly extensible and abstracted). But if you don't have those people, it can be much faster to write than Java due to the better tooling and standard libraries.

Also, less popular languages might be missing very nice features that make the quality of life of deploying/testing/etc. better, which is really important if you are just setting up a small-to-medium sized project where you yourself are responsible for the devops. I found C#/.NET (still not exactly sure where one ends and the other begins...) very easy to work with in this regard, and I'm sure Java and Go are also probably pretty good here. But I expect C and Rust are not so good.


You might be surprised how good Rust is ops-wise. Unit testing support comes out of the box, and it's hard to beat a static binary for deployment (compile times are a pain for CI though).

Regarding C#, that language and its core tooling are indeed pretty nice. But I was surprised to find the library ecosystem quite lacking in some places. Basic things like JSON (de)serialization seemed pretty clunky, and the long tail of libraries I expect coming from the JS/PHP ecosystems seemed to be pretty non-existent.


> Basic things like JSON (de)serialization seemed pretty clunky

The built-in one is... strange.

Newtonsoft.JSON is _the_ most downloaded third-party library for .NET: https://www.nuget.org/stats/packages

I like it; it's very flexible, in a good way.


There will be a new JSON API in .NET Core 3.0: https://devblogs.microsoft.com/dotnet/try-the-new-system-tex...


I found it not very flexible compared to Rust's Serde (https://serde.rs/), which is my only comparison in statically typed languages. Customising things like mapping camel case to snake case required creating whole classes.


> Customising things like mapping camel case to snake case required creating whole classes.

The class for that is built-in: https://www.newtonsoft.com/json/help/html/NamingStrategySnak...

The implementation only overrides a single virtual method, ResolvePropertyName (https://github.com/JamesNK/Newtonsoft.Json/blob/master/Src/N...), so if you want some other convention, it's very easy to create a custom class for that.

Looks easy enough to me.

I never actually used that part. I prefer to use names from JSON. Or if the JSON schema is really bad, specify JSON names manually with attributes, like this:

    [JsonProperty( "json_property_09123" )]
    public string MagicProperty;
This way searching source code for a JSON name finds the C# class serializing that property. Makes working on the code much easier, IMO.


C# is the language and syntax, while .NET is the runtime and libraries (or Framework). In Java terms: Java is the language, the JVM is the runtime, and the JDK provides the libraries.


C, D, C++, rust, and nim should all have comparable performance; they all go through gcc|llvm, and all have the same optimizations. Rust and c++ may be slower in debug mode than their other-language counterparts, but the cost of abstractions goes away when optimizations are turned on (in exchange for drastically increased compile times). Java is not significantly more OO than C++; global-scope variables and functions in c++ are analogous to static ones in java. D's interop story is similar to nim's. I don't know rust, but I believe it is said to be more expressive than c++.

Honestly, I think all the judgements of dev time are a little bit misleading; iirc there were some studies showing that productivity is not significantly different across languages.

Worth noting, though, is that D and Go compile really quickly.


> Worth noting, though, is that D and Go compile really quickly

In my experience, so does Nim. Yes, it's a two-step process, but the translation to C code is blazingly fast, and the generated C is of the clean and straightforward kind which gcc or clang can handle in a highly efficient way.


> C, D, C++, rust, and nim should all have comparable performance;

For the same design, yes; but what sets fast code apart from slow code is the availability of designs enabled by the language. Compile-time computation is a huge win for D (IIRC Nim has similar capabilities).
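
To make this concrete, here is a minimal sketch of D's compile-time function evaluation (CTFE); the factorial function is hypothetical, and the enum initializer is what forces the call to run inside the compiler:

    // an ordinary function; nothing special is needed to make it CTFE-able
    int factorial(int n)
    {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    enum f10 = factorial(10);        // evaluated during compilation
    static assert(f10 == 3_628_800); // checked by the compiler, zero runtime cost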


Note that Nim compiles to C, C++, Objective-C, or JavaScript, and not to LLVM IR directly. Just because gcc or llvm are involved in the compilation process doesn't mean Nim is as fast as C. Cython, for example, compiles Python to C but doesn't always match the performance of writing the code in native C.


Nim and Cython are not analogous. Most notably, Nim is statically typed, and identifiers are statically determinable. I don't say Nim is as fast as those other languages because it compiles to C, but rather because that's what benchmarks show.


Your comment on dev time is something I intuitively agree with. And there are the future costs of dev time to add new features later, debug code, etc. It's a poor measure at the end of the day.

As to C/D/C++/Rust/Nim, what I think is interesting to see, is which ones lead you to the performant path naturally, vs having to really dig into the bowels of the language or go against the grain of the language, so to speak. I have not written enough of any of them to say which do this. I would hazard that Rust does the best job, due to its explicitness / very nature.


> which ones lead you to the performant path naturally

D's big advantage is the plasticity of the code, meaning it's much easier to try out different data structures and algorithms to compare speed. My experience with C and C++ is that data structures are hard to change, meaning one tends to stick with the initial design.

For a smallish example, in C one uses s.f when s is a value of a struct, and s->f when s is a pointer to a struct. If you're switching from one to the other, you have to go through all your code swapping . and ->. With D, both are written s.f.
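
A tiny illustration, with a hypothetical struct S:

    struct S { int f; }

    int byValue(S s) { return s.f; } // s is a value: dot access
    int byPtr(S* s)  { return s.f; } // s is a pointer: still dot access (auto-dereferenced)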


That's an excellent point. There was a neat paper a few years back making the point that Java could actually lead to better performance in complex scenarios than C or C++, because the team could iterate on the algorithm faster and explore different paths.


This is also the same claim that gets mentioned when k comes up: the shortness of programs leads to finding better solutions.


> I would hazard that Rust does the best job, due to its explicitness / very nature

I might say the same about C, except even more so.

As for 'the performant path', something to watch for there is Jai, probably coming out later this year. It has some clever things, like dynamically switching between SoA and AoS, or doing small dynamic allocations on the stack.


C does, until you start making things multithreaded, at which point you either start defensively copying things, or dev time slows massively due to having to maintain complex locking invariants in your head.


I haven't looked at Jai. I'll have to check it out. I have looked at Zig, which seems to inhabit a similar space. They both seem a little too early-days for me at the moment (I at least want a chance at using it at $work), but I like the general idea of a better C.


D has a betterC flag [0]. Though at that point I'm not sure why you wouldn't just write C++17.

[0] https://dlang.org/blog/2017/08/23/d-as-a-better-c/


D's betterC includes the metaprogramming facilities.
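
As a minimal sketch (the max template is hypothetical): compiled with `dmd -betterC` there is no GC or D runtime, but templates still instantiate as usual:

    // compile with: dmd -betterC
    import core.stdc.stdio : printf;

    // an ordinary D template, fully usable without the D runtime
    T max(T)(T a, T b) { return a > b ? a : b; }

    extern(C) int main()
    {
        printf("%d\n", max(3, 7));
        return 0;
    }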


And at the very least it has dub, right? I've thought about it a lot.


I'd love to use betterC more if it had dynamic arrays and assoc arrays. BetterC opens up opportunities such as WebAssembly too.


WebAssembly is slowly (but carefully) deciding on a garbage collection spec of its own; once that's available, you should see a whole new slew of languages targeting it, including full-fledged D.


D is just such a pleasure. It's like it was designed to make the programmer happy. Maybe that's not a reason to choose it, but it's a huge perk.


>D is just such a pleasure. It's like it was designed to make the programmer happy.

IIRC, Walter has said somewhere (likely on the D site or on his site or one of his articles for DDJ) that it was designed for that, actually, or something to that effect [1]. I've felt that myself while using it. I think he also said something like the code should look good on the page.

I agree.

[1] Or at least, make him happy :)


This subfeed of my blog has all my D language posts, about a dozen of them (apart from a few videos of D or related confs):

https://jugad2.blogspot.com/search/label/DLang

They may be of interest to beginners or others wanting to get a bit of a feel for some of the kinds of things D can do, via simple command-line programs, along with checking the D site and tour at dlang.org.


It's a very important point that is often underrated. Programming is a human activity; the tool should make the human happy.


My thoughts exactly. If I want to build something, I just do it, and I really enjoy it.


> iirc there were some studies showing that productivity is not significantly different across languages

If it's the same studies I'm thinking of, the result was that productivity was about the same in all languages _when measured in lines of code per day_.

So say 1k lines of assembler took the same time as 1k lines of Python. However, the higher level languages can _do_ more for the same number of lines.


There are things where it's the reverse, though. If you're manipulating things at the byte level, you have to play with packing and unpacking the data unless it's a binary format already understood by Python, like common string encodings such as UTF-8. Also, if you're calling native functions from Python that expect structs as arguments, it's a bit annoying.


That doesn't make sense to me. You might not save much time when going from C++ to Rust, but what about C++ to Ruby or Python?

There are things I can do with a few lines of code in Python that when I looked at how to do in lower level languages I just decided I could live with slower performance.


I do 95% of my dev work in Python so I'm quite biased in its favour.

But I've seen literally 1000x speedups from transferring the numerical heavy lifting to C++, which paid back the time it took to rewrite that code in very short order and enabled us to do something that wasn't previously close to practical. I was pretty sure that the benefit would be closer to 10x, but nope!


1000x faster than numpy/scipy?


Yea... were you using Numpy or just trying to do something in base Python?


I use numpy/scipy for everything I do by default, but not all operations can leverage it unfortunately.

So what I mean is that the python program leveraged numpy to the greatest possible extent (that I know how), but was nonetheless 1000x slower than the C++ program that produced the same output. I am not making any claim that particular operations like summing up an array or doing an FFT is 1000x faster in C++ than in numpy. Of course it's not. In this case, numpy calls were certainly not the bottleneck, so it would be wrong to say "1000x faster than numpy". But it was, on the whole, 1000x faster than Python leveraging numpy.


> C, D, C++, rust, and nim should all have comparable performance

No, Nim is often among the fastest, sometimes surpassing C. This is because it targets C by default and uses data structures and code paths that GCC can optimize very well.


> C, D, C++, rust, and nim should all have comparable performance; they all go through gcc|llvm, and all have the same optimizations.

That's not necessarily true.

For example, language semantics (essentially how much information the optimiser has), such as whether mutable pointers are allowed to alias (or even the existence of non-mutable pointers), can play a huge role in what optimisations can be applied in a given situation and even how effective certain optimisations are. Also, default behaviours like D, Rust, and Nim doing bounds checks can change the practical speed (although not necessarily theoretical speed) of a language. And finally, things like garbage collectors can make certain languages faster in some cases but slower in others (again there's a practical vs theoretical argument here, since you could get similar speed-ups using other techniques).

So no they don't necessarily have to have comparable performance, at least not in all situations.
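
As a concrete illustration of the bounds-check point, a minimal D sketch (the sum function is hypothetical): indexing a slice is checked by default, while raw pointer indexing in non-@safe code bypasses the check:

    int sum(int[] a)
    {
        int s = 0;
        foreach (i; 0 .. a.length)
            s += a.ptr[i]; // raw pointer indexing: no bounds check here;
                           // plain a[i] would be checked by default
        return s;
    }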


It's a completely meaningless comparison in practice.

D's contract mechanism (i.e. in/out, and invariant blocks) can provide very strong guarantees to the compiler (as a specified part of the language as opposed to a GCC pragma/builtin), which the LLVM D compiler definitely uses.

All of the on-by-default behaviours that D has are usually there for a reason, i.e. floats are NaN-initialized and bounds checking is on by default: these are very good idiot-proofing, which can usually be left alone unless profiling suggests there is a tangible issue.
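
For reference, a minimal sketch of D's contract syntax with a hypothetical integer square root; the in/out blocks are assertions on entry and exit, which an optimizing compiler is allowed to treat as facts:

    int isqrt(int x)
    in
    {
        assert(x >= 0);          // precondition
    }
    out (r)
    {
        assert(r * r <= x);      // postcondition on the return value
    }
    do
    {
        int r = 0;
        while ((r + 1) * (r + 1) <= x)
            ++r;
        return r;
    }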


> D's contract mechanism [...] can provide very strong guarantees to the compiler... which the LLVM D compiler definitely uses.

That's awesome. I didn't know LDC uses contracts like that. It'd be nice to have similar functionality across all D compilers.


If you’re looking for more “PL ADD” fuel, don’t forget to look into Zig! https://ziglang.org/

Of all the modern systems level programming languages I’ve seen, I like Zig the most. It combines a powerful, modern type system (generics, algebraic data types, statically checked null references) with an impressive simplicity (e.g. without the productivity-consuming and mentally burdensome “borrow checker” of Rust).


> without the productivity-consuming and mentally burdensome “borrow checker” of Rust

The borrow checker also gives you memory safety, which Zig does not have at the moment.

Solving use-after-free and related issues is a hard problem. I have yet to see a way to solve it that doesn't boil down to one of runtime garbage collection, not allowing heap allocation at all, not freeing memory at all, or a region system/borrow checking. It's fine to not like the borrow checker, but it's there for a reason.


Yeah, but currently it still is a roadblock when writing GUI code, to the point that Gtk-rs samples use macros to overcome the boilerplate to access widget data from callbacks.


I'd not call the borrow checker "productivity-consuming": it is indeed a tough challenge for somebody learning the language, but in the medium/long run it becomes totally transparent once you've internalized its expectations. I've been working in Rust full time for one year now, and the only time I really had to think hard about "how to write this piece of code while complying with the borrowck rules", I was really happy to have it, because I would have written a nasty memory bug if nobody had told me that I was doing something stupid.

It comes with a steep learning curve though, and I remember giving up three times in two years when I decided to give Rust a try. It eventually clicked on the fourth attempt (I don't know if the docs had evolved between the third and fourth attempts, or if it's the result of the slow progress I made through the first three tries), but now I'm really happy with the language.


Question: For someone who does not know either Rust or Zig, would you say that Zig also has tools that help protect the programmer from data races (I have read that Rust is helpful in this regard)?


I have looked at Zig! It looks sweet, but a little too early-days for me personally. For example, there's no documentation on how to open a file and read a line from it. The best I can find is an example implementation of `cat` in the source repo. I will certainly be coming back to it at some point though.


Adding to your list:

- Swift: C#-like productivity, compiles to native code, however mostly constrained to Apple platforms

- Eiffel: RAD tooling, uses a JIT for development, compiles to native code via C and C++ compilers for deployment, mostly constrained to enterprises that value correctness above all and are willing to pay its price

- Delphi: One of the best RAD toolings still around, with compilation to native code, while allowing all the low level stuff from C and C++; mostly safe, suffers from manual memory management. Borland mismanagement positioned it as an enterprise devtool. You can use Lazarus/FreePascal as an alternative

- Ada/SPARK: Compiles to native and also enjoys fast compilations. For devs and companies that care about code quality. Besides GNAT, all the remaining implementations have enterprise-level prices


Ada has always been intriguing to me. Is it used much anywhere these days?


Avionics, trains, oil rigs, basically everything where human lives are at stake, deemed High Integrity Computing.

Only 4 languages apply: Java with Real Time extensions, C and C++ with certification processes like MISRA and AUTOSAR among others, and Ada/SPARK.


> Essentially there's a class of programs that are performance sensitive, but also dev time sensitive. How do you choose between the set above? (The answer, if there is one, is probably Python, and then optimize the hotspots, but let's pretend we can indulge).

Only if you don't believe other languages can be more productive than Python. IME an ML-like language with type inference and good development tools will be faster to work with than Python, at least as soon as you have to edit code (for a script short enough that you can write it out and expect to be correct first time Python might still win).

I regard the ML featureset as table stakes for a language these days; while most languages (all serious languages?) have first-class functions, proper sum types are so useful that I don't want to use a language without them. That cuts the list of possibilities down quite a lot: in terms of having enough maturity/popularity for production use it probably comes down to OCaml (which becomes the default option by seniority), Haskell, F#, Scala, Rust, Swift, or Kotlin. D or Nim just don't offer anything compelling that's worth giving up sum types for, and while Swift or Kotlin can more or less equal OCaml they don't offer a compelling advantage.

Any of OCaml/Haskell/F#/Scala/Rust is a defensible choice. Haskell and Scala offer higher-kinded types, which are immensely useful once you get used to them. F# and Scala offer decent IDEs/tooling in a way that the others mostly don't. Rust offers a limited form of linear typing, but as a special case built into the language rather than as functionality emerging from a general type system; you can achieve a similar level of resource safety with a rank-2 type trick (famously used in ST) in a language that supports those (i.e. Haskell or Scala).

I use Scala for everything these days. Better-than-Python dev time, better-than-Rust type system, Nim/Java/C#/Go/D/Kotlin-like performance, and first-class Java interop (which I consider better than C/C++ interop, because using C/C++ interop makes your whole program memory-unsafe). Bonus of a compile-to-JS implementation that just works with the same code that you can run in your IDE.


"higher-kinded types, which are immensely useful once you get used to them"

Could you give an example of such immense usefulness?


All of the problems that aspect-oriented programming tries to solve, but without any reflectioney magic. So for example I have a custom type to represent database operations that need to happen within a transaction; the type system ensures that there will be a transaction boundary around them eventually, but I can still compose together several different such functions and run a single transaction around them all. In theory you could do this with a "command object", but in a language without HKT you'd have to reimplement all the utility functions that make it practical to use those objects (e.g. traverse, which takes a list of must-happen-in-transaction commands and combines them into a single command that will evaluate to a list) and in practice people doing this kind of thing in Java/C#/Python/... give up and fall back to reflection/AOP/metaclasses/decorators, because it's too cumbersome to manage the transaction boundaries explicitly. But then you get things like http://thecodelesscode.com/case/211 happening.

With HKT you can have reusable, generic functions that work on any kind of effect - all the examples on https://philipnilsson.github.io/Badness10k/escaping-hell-wit... and more. E.g. a tree structure library will offer a traversal method that can already handle doing an effectful operation at each level and composing together the effects correctly - even if it's an effect that didn't exist when that library was written. That makes it practical to manage these effects/cross-cutting concerns explicitly, which enables fearless refactoring (it's always clear whether you can e.g. reorder some statements without changing behaviour or not), so you need much less test coverage for the same level of confidence which in turn makes refactoring even easier, and you end up with a virtuous cycle where your code stays clear because it's always easy to refactor for clarity.


Thank you, I've read the links, but without an example of Haskell/Scala code the benefits for any particular real problem are not clear to me.

Specifically, speaking of the second link, what happens if you need to get rid of two types of 'hell' at the same time?


> Thank you, I've read the links, but without an example of Haskell/Scala code the benefits for any particular real problem are not clear to me.

Do you have cases where you've used AOP/reflection/metaclasses/decorators/macros or similar "magic"? (Exceptions or mutable variables sort of work as examples, but most people don't have experience of working on codebases that don't have them). It's hard to give simple examples because these things exist to solve complex problems - one should never reach for a monad where a plain function will do.

> Specifically, speaking of the second link, what happens if you need to get rid of two types of 'hell' at the same time?

There are several possibilities. First, it's worth noticing that most forms of "hell" are fine on their own, it's the interaction between two that tends to be surprising - e.g. https://glyph.twistedmatrix.com/2014/02/unyielding.html talks about how important it is to explicitly manage async, but it's making an underlying assumption that your language will have mutable variables. Unmanaged, pervasive mutation in a language without async is ok; equally, unmanaged, pervasive async is ok in a language without mutation. So often you can get away with managing one effect and letting the other happen naturally according to the program control flow. (This is less true in Haskell because laziness is built into the language, so Haskell has kind of "spent its freebie" there - but it can be a very effective style in Scala).

Second, if you need to work with a particular combination of effects then most effects are available as "transformers" which can be stacked together and the monadic functionality operates the usual way on the whole stack. E.g. several of my applications work with a type that's something like EitherT[WriterT[Task, Seq[AuditEntry], ?], ApplicationError, ?] - an async operation that writes entries to an audit log and might fail with a constrained set of possible failures. Within a single application you usually have the same pattern of effects that you want to manage it, so you can give your "carrier" type a friendly alias and write a handful of helper methods (which are really just simple combinations of standard lifts, but they get used so often that it's worth having short and domain-appropriate names).

Finally, if you're writing a library that needs to be reused with different "effect stacks", the usual approach is mtl-style typeclasses (though there are alternatives - I prefer free coproducts). You write functions in terms of a generic F[_] type (that will actually be one of those concrete effect stacks) with typeclasses that represent each "capability" of an effect that you want to be able to handle. This sounds complicated but it's actually the normal way of writing generic functions in Haskell or Rust (which don't have inheritance), and it's very easy at the point where you're writing the business logic: you just write a big "do" block doing all the things you want to do, and e.g. if you need to call "async" then you add "MonadEffect Async" to the function signature.

Of course there's no free lunch - ultimately you will have to define the details of how the async effect gets interleaved with your other effects somewhere, but this technique lets you move that decision out of the library and into the application that's calling the library: there are several different stacks that can be constructed that will conform to the given interface, and the details of how you stack the effects define how the composition will play out. For example if you have a function that both emits log events and could fail, then you can call that with EitherT[WriterT[...], ...] (the log events are always recorded even after the computation fails), with WriterT[EitherT[...], ...] (a failure doesn't include any log events), or with a custom implementation that did something special like marking the last event before a failure in some way.

https://www.parsonsmatt.org/2018/03/22/three_layer_haskell_c... takes a somewhat different approach to application design from what I do, but it's a more comprehensive example of how everything fits together in a full application.


Thank you, I was thinking about your own example with operations and transaction from the previous comment. How would the code look in your approach?

Unfortunately, I have very basic knowledge of Haskell and can't appreciate the last link, however I've got the general idea of the second part of your comment (I think, maybe not) and two things bother me:

1. What if I find it convenient to use two different effect stacks in two different parts of my app? How do I write code that calls functions from both parts of the app?

2. Suppose I use the same 'carrier' type in the app and one day want to handle one more effect, what changes to the code using this type does this entail?


> How would the code look in your approach?

We'd use "do notation" in Haskell, or "for/yield" in Scala. So in Haskell it would look like any of the "escaping hell with monads" examples, or in Scala the code might look like:

    for {
      user <- loadUserForUpdate()
      groups = calculateGroupsToRemove(user)
      _ <- groups foldMapM {group => remove(user, group)}
    } yield groups
We can see that calculateGroupsToRemove does not participate in the transaction but loadUserForUpdate and remove do. loadUserForUpdate might return MustHappenInTransaction[User] whereas calculateGroupsToRemove just returns a Vector[Group], but it's useful to be able to see the distinction at the call site. Note that we have to use the (standard/well-known) "foldMapM" function to map over the list of groups, instead of "foldMap". You can't write these generic helper functions in a language that doesn't have higher-kinded types, and without them this technique would be too cumbersome to use in practice.

The code evaluates to a MustHappenInTransaction[Vector[Group]] or something like that, and eventually at some point we have an unsafePerformTransaction function that actually runs the transaction. It's still possible to mix up transaction boundaries by calling unsafePerformTransaction multiple times, but, like with the "unyielding" example, it's an opportunity to notice any mistakes.

> 1. What if I find it convenient to two use different effect stacks in two different parts of my app? How do I write the code that call functions from both parts of the app?

To write functions that you can use from both parts of the app, you'd need to use the same approach as if you were writing a library (i.e. MTL-style typeclasses). Then at the use sites you just call the functions - it's actually just generics. Like, if you have a generic sort[T: Sortable] function, you can call that either from another generic [T: Sortable] function, or you can call it with a concrete type like String (as long as that type is sortable). In the same way, if you have an "effect-generic" function you can call it either from another effect-generic function with (a superset of) the same constraints, or you can call it from a concrete effect stack (provided that concrete stack satisfies those constraints).

> 2. Suppose I use the same 'carrier' type in the app and one day want to handle one more effect, what changes to the code using this type does this entail?

If you're using the "single global carrier" approach, what I'd do is redefine the type alias, reimplement the "helper" methods to do one more lift, and then code that's just using the carrier will stay the same. E.g. when I add the audit entry functionality, I change

    type ApplicationStep[A] = EitherT[Task, ApplicationError, A]
    def wrapSuccess[A](t: Task[A]): ApplicationStep[A] = t.lift[EitherT[?, ApplicationError, ?]]
    ...
to

    type ApplicationStep[A] = EitherT[WriterT[Task, Seq[AuditEntry], ?], ApplicationError, A]
    def wrapSuccess[A](t: Task[A]): ApplicationStep[A] = t.lift[WriterT[?, Seq[AuditEntry], ?]].lift[EitherT[?, ApplicationError, ?]]
    ...
    def wrapAuditEntry[A](ae: AuditEntry): ApplicationStep[A] = WriterT.log(Seq(ae)).lift[EitherT[?, ApplicationError, ?]]
and then code that uses the existing type and helpers remains the same, and new code can use the new helper. Code that works directly with EitherT would need similar changes (extra .lift call).

If you're using the "mtl-style typeclasses" approach then you don't need to make any changes to code that doesn't emit the new effect. Functions that want to write audit events will need an additional ": MonadTell[AuditEntry]" constraint, which will ripple up to functions that call those functions, and then at top level the type used at the entry point has to change.


Well written answer, thanks for sharing


Ada is being actively used by Nvidia in some of their research:

https://blogs.nvidia.com/blog/2019/02/05/adacore-secure-auto...


I have always wanted to look into Ada, but could never see being able to make a serious business case to use it for real things at the moment.


It's used widely in defense, much less in private industry. The exception is if you write Oracle PL/SQL - that resembles Ada.


FWIW Java is gradually working in some promising new productivity features (many of which fall under the Project Amber umbrella).

https://news.ycombinator.com/item?id=20269840


Java's productivity is high with a good choice of libraries and an IDE like IntelliJ that offers fantastic refactoring, code generation and code-intention abilities. And Kotlin's productivity is even higher, since all the above apply along with convenient language features for succinct and functional-style coding.

So I would rate Java as fast and Kotlin as very fast in the dev productivity scale. You can really push the pedal in these two if you are working in an IDE and you can change your design iteratively on the go thanks to excellent tooling.


On the performance dimension, I am pretty sure C++ effectively passed C a while ago given good compilers. At the very least, it requires significantly more code and effort to make C as fast as modern C++.


> I am pretty sure C++ effectively passed C a while ago given good compilers

Citation? Every benchmark I've ever seen still has C coming out on top: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... . The gap has narrowed and C++ can be better in specific situations, but it has not "effectively passed" C, not to mention that C uses less memory. Here is another one: https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sle...


It is always a matter of how well those benchmarks are written.

For example, good luck achieving in C what is possible to do in C++ with constexpr at compile time; the C version will require additional runtime cost during benchmark execution.

Or sorting data via qsort() versus STL sorting algorithms with templates, which in C always requires the runtime cost of an additional indirect call.

Or now with C++20 contracts, validation of pre-conditions, which with C are only possible at runtime, while C++ can already make use of the information if available at compile time.

C++Now 2019: Odin Holmes, “Hey C, This Is What Performance Looks Like”

https://www.youtube.com/watch?v=CNw6Cz8Cb68


> qsort() versus STL sorting algorithms with templates

This is one of the most popular examples for superiority of template instantiation. Yet when I measured the speedup on integer arrays (where arguably you can expect the largest speedup) it was less than 2x. Now considering also the tradeoffs that come with using C++ (larger compile times, larger binaries), and the fact that sorting is rarely a bottleneck, I'm not so sure anymore that std::sort is generally superior.


That is a popular example, because it is a simple way to describe the difference between templates versus bare bones C coding.

And in some scenarios even a little 2x speedup does have an impact.

With template specializations that speedup can be increased even further, and since the compiler sees template bodies, there are even more opportunities for the optimizer to do its work.

C++ can be used in something like a Commodore 64 just fine, in fact with smaller binaries thanks to constexpr all things.

CppCon 2016: Jason Turner “Rich Code for Tiny Computers: A Simple Commodore 64 Game in C++17”

https://www.youtube.com/watch?v=zBkNBP00wJE


A decent C++ compiler will also inline the sorting for something like:

    #include <algorithm>  // std::sort
    #include <array>
    #include <cstddef>    // size_t

    constexpr size_t N = 3;
    void sort_N(std::array<int, N> a) {
        std::sort(std::begin(a), std::end(a));
    }

but I haven't checked if it does the same with qsort. But I agree with your larger point; I find the efficiency claims for C++ frequently overblown and the compile time burden substantial (OTOH it's still very hard, but slightly easier to write robust code in C++ compared to C).


2x is actually pretty significant considering the only difference is the way the comparer is called. Also consider how easy it is to apply similar boosts in many scenarios with little mental overhead, and it's a significant win for C++.

C has _Generic though, which can do the same; it seems a bit unfair to compare an old method kept for backward compatibility with C89 against whatever version of C++ first gained this advantage.


2x is only for integers, of course. Often you'll have bigger structs to sort.

But anyway. If performance matters (it mostly doesn't; sorting is usually not a bottleneck), you can reap much more than 2x by various tricks:

- Improve the data by adding easier to compare fields

- tailor the sorting algorithm to the data (what machine types? can we optimize with CPU-specific assembly? can we do a radix sort? etc.)

- avoid the need for sorting altogether. Often, the data can be constructed in a better way such that it falls out sorted.


> It is always a matter of how well those benchmarks are written.

I agree it's not perfect, but I'm yet to see a single set of benchmarks show that "C++ effectively passed C a while ago" and yet I see it claimed somewhat frequently around here. Generally the games I see around benchmarks go the other way though, with people effectively writing C in the comparison language and not idiomatic code.

> For example, good luck achieving in C what is possible to do in C++ with constexpr at compile time

The only thing constexpr solves AFAIK is doing it all in C++; generating compile-time constants is hardly magic, and it's been solvable with anything from a shell script to something more complicated forever.

> Or sorting data via qsort() versus STL sorting algorithms with templates, which in C always requires the runtime cost of an additional indirect call.

It's not as ergonomic, but things like this are also doable with _Generic in modern C.


Then look at Eigen vs alternative C libraries for example.

constexpr allows for compile-time code execution; it was introduced in C++11 and extended with each C++ revision, and the C++20 version even allows for partial STL use.

There is plenty of stuff one can do with it; when coupled with template metaprogramming, it is basically C++'s version of Lisp macros, although a bit clumsy.

Generating constants is just the tip of the iceberg.

Using shell scripts is a language-agnostic solution.

_Generic is very constrained; try to implement std::sort() for any kind of data structure with it, not only basic numerical types.


On the other hand, there is exactly one copy of qsort in your entire program image, no matter how many different uses it is applied to. That's better from a caching POV than numerous expansions of templated sorting.

Caching is everything; something that does an indirect call, but through a hot cache, is usually better than something that avoids it, but thrashes the cache.

Caching is so important that byte code that fits into a L1 cache (together with the byte code dispatcher) can beat native code that doesn't fit.

Indirect calling itself isn't so bad when it isn't computed. The comparison function being called by qsort stays the same over the entire job. The source operand which provides the call address is known way in advance of the indirect call, making it possible to pipeline through that without stalls.

> good luck achieving in C what is possible to do in C++ with constexpr at compile time

You can always just type out all the code you want by hand, including all the cases, using a liberal sprinkling of copy and paste. You can open-code some quicksort by hand, or else piece it together out of some prolog/payload/epilog type macro fragments or whatever.

It's just not generic and reusable that way; but in many C programs, that doesn't matter at all.

At the end of the day, we are somehow still using LAPACK routines written in Fortran for number crunching.


constexpr is much more powerful than just generating constants (although that is one of the bigger use cases). One of the things I like is how it simplifies doing static dispatch on template types, replacing SFINAE tricks with something much simpler to read:

    if constexpr ( std::is_same_v<T, int> ) { ... }
Vs

    template <typename T, std::enable_if_t<...>* = nullptr>
    void fn( T t ) { }
Being able to generate constants using the language itself is super powerful as well (the below is a bit contrived, as std::string_view is constexpr and obsoletes this pattern):

    static constexpr std::array<const char*, SIZE> strs = { ... };
    static constexpr std::array<size_t, SIZE> str_lens = calc( strs );
    // Where calc() loops over the array and calls std::strlen, all at compile time


"it's been solvable with anything from a shell script to something more complicated forever"

By this logic C is as good as any language, as long as you can compile that language to C.


The logic is more "why add complicated features to a language when there are existing ways to get the same functionality". C++ is already overcomplicated; the last thing it needs is more features. This effort is also being replicated across several languages, with all their own syntax and warts. Generating source code via a script or template library is much more compositional.


I'm not sure if adding an additional language or two to C makes the codebase simpler than pure C++ code.

What's more, I've witnessed quite a few times the code-generating program becoming incomprehensible as more and more parts of the code get generated conditionally.


And more error prone, while a type system enforces correctness.


If you're using macros or another language entirely to generate C code, it's often to preserve type safety. Otherwise it's often easier to typecast everything through void pointers.

It's the same situation with Go, so it's not like C is entirely anachronistic with regards to the trade-offs it makes.


Macros and type safety don't really go hand in hand, which is why ISO C++ is progressively removing any reason to still use them beyond conditional compilation.

Go, just like C, is stuck in a bygone era of programming languages.


Good point, I had always just taken for granted that C would be faster, and it's not! (Obvious disclaimers about benchmarks inserted here https://benchmarksgame-team.pages.debian.net/benchmarksgame/...).


I'll point out that those benchmarks show C++ faster for four, C faster for five, and both the same for one scenario.

Modern C++ enables some very concise expression, at the cost of significant language complexity, and enormous compile-time cost.

C requires more work from the programmer, but is vastly quicker to compile.


> and it's not!

Well those numbers seem to show some C programs are faster than the C++ programs they are compared to; and other programs are not.


Julia is the fastest (in speed achievable in a short dev time) language I’ve ever used. Especially for any scientific computing workloads.


I really want to love Julia. I've never gotten over the startup time, slow string functions, and the package manager REPL thing. Perhaps I was getting into it at a bad time, right around when the new package manager was being announced. The whole thing felt very magic... too R-like for comfort.


The start up times really bug me too. I think the solution might be sticking to a Jupyter/Juno session or keeping a repl open and reloading. https://github.com/timholy/Revise.jl

It is incredible the degree to which small inconveniences can inhibit adoption.


Probably not easily embeddable on a vehicle though.


"- Crystal: ?; fast?; have not used"

Ruby is an extremely powerful language, with the only drawback of being truly slow. Crystal solves this problem with flying colors by offering almost full Ruby compatibility along with very efficient native code generation. Unfortunately it's still young and lacks user base as well as ports to other architectures.


Lacking the Ruby ecosystem means it lacks like, 90% of the reason you’d choose a language.


>almost full Ruby compatibility

Crystal is very different from Ruby. Hello world will copy paste but there are major differences after that.


Yup, quite fast. On par with Go. Compiling Crystal is also fast and the language is beautiful.


In a discussion involving C, C++, Rust and Nim, "on par with Go" is probably not considered fast. Also, it would seem Crystal is closer to those languages I mentioned than to Go in speed.


According to the benchmarks at https://github.com/kostya/benchmarks, Crystal is about twice as fast as Go.


You'll definitely want to check out ATS[0].

It's a functional, dependently-typed language with performance on par with C (its compilation target), and it has a type system and theorem prover that guarantee memory-safe code.

[0] http://www.ats-lang.org


I am personally very intrigued by ATS. I think it is striving for a truly unique and powerful point in the programming language space, but I have extreme reservations about it being ready for general use. Do you consider it ready for adoption, or are you bringing it up as a learning exercise?


> I am personally very intrigued by ATS

As am I. I'd say this, along with Idris and Agda, is showing how useful types can be.

> Do you consider it ready for adoption

People have written non-trivial software in it[0], but because it hasn't yet reached a 1.0 release, I wouldn't say it's quite ready for adoption. It seems to be more of an academic project for now.

[0] https://github.com/xlq/aos


> - Nim: C/C++ interop is second to none.

This is very wrong.

There is perfect C interop: you can import C functions using the {.importc.} pragma.

C++ interop is the best I have seen so far. You can use C++ template types! Here is a sample: https://github.com/3dicc/Urhonimo/blob/master/modules/contai... Of course it is not perfect. For example, if you would like to override a C++ method you will have to write a bit of C++ code using the {.emit.} pragma. I do not know a single language that can map C++ template types onto its own generic types like Nim can.


"second to none" means "best" ("there is no one it is second to, thus it is first")


TIL. Not a native English speaker. My bad.


You made my point better though! Nim interop is awesome.


D's C++ interop looks better at a glance.

With a simple extern(C++) you can use templates and classes, and vtables are matched up to single inheritance. There is also some experimental work on catching C++ exceptions, but I've never tried to use it.


Also: COM support. This can be incredibly useful for interop.


Especially since Windows has gone full COM for all new major APIs.


"second to none" means "the best".


I have chosen Go for now, after having learned Nim and Rust. I also know Ada, FreePascal, Racket, CL and a bunch of other languages fairly well. Python was never an option, because it's too slow and (subjective opinion) a very ugly language. There were clear reasons why Go was the best choice for me:

Pros:

- large infrastructure, many third party libraries

- good tooling

- fast compilation speed

- reasonably fast executables (fast enough)

- (just) good enough GUI bindings, although it almost failed on that one

- automatic garbage collection

Cons:

- lack of generics

- a bit too much verbosity (if err != nil)

- million dollar mistake

Subjective assessment of the other languages (for me, and my type of projects - remember, a programming language is only a tool for a certain purpose):

- Nim is a great language, but unfortunately seems to be eternally unready for prime time. Lack of complete enough GUI bindings.

- Rust is a convoluted language and has no garbage collection. It is marginally better than Ada at the cost of readability, has a number of minor misfeatures, and an unnecessarily steep learning curve. I predict it will be at least as complex as C++ ten years from now, with all the disadvantages and security issues this brings. Especially macros are a very, very bad idea. (They also count against Nim in my book.)

- D is a very good language and on my list of things to learn. Reasons for deciding against it were primarily its nearness to C++ and the express intent of the developers to get rid of garbage collection in favour of some convoluted, needlessly complex borrow checking. (Unfortunately, D also has macros, but they are less obtrusive and less needed than in Rust.)

- C: dev time too slow

- C++: dev time too slow, too many pitfalls (even if you use just a subset, at some point you will have to deal with esoteric code and bugs that require 40+ years of experience to understand); ugly language.

- Kotlin: the Java VM is out of the question, and the long-term technology support on some platforms is too uncertain.

- Crystal: issues with multithreading (if I remember correctly), lack of a good enough GUI binding.

- Java: out of the question - I consider it legacy technology; I've used it in the past and found the frameworks to be horrible (too much OOP); ugly language; future VM support too uncertain on some platforms.

- Julia: very nice language, fast; it lacked a good enough GUI binding last time I checked, but once it has one, it will be on my list of things to test.

- Zig: interesting but too close to C, no garbage collection, only manual memory management.

- FreePascal/Lazarus: probably the best GUI support but mostly manual memory management and felt kind of old; basically, I didn't choose it for fear of accumulating too much technological debt.

- Ada: probably the static language I like the most, has everything I need, but is pretty much dead - not enough third party libraries, single vendor lock-in, only GTK as viable GUI option, fear of accumulating too much technological debt; it's a pity, because I think it's the best language among those mentioned so far.

- Racket: my main language for the past 20 years; I have successfully developed GUI applications with it; large tooling, fast enough, BUT: application startup time too slow, rich text editors too sluggish, GUI slightly too limited, professional deployment can be complicated (by professional I mean with all correct metadata, icons, according to OS guidelines, code signing, sandboxing, etc.), dynamic typing bad.

- CommonLisp: probably a good choice for high-tech long-term, large-scale web-based applications like booking systems or scheduling systems; libraries suffer from lack of documentation, executable sizes and memory consumption fairly large, bindings to C/C++ libraries notoriously incomplete or undocumented; dynamic typing bad.

Okay, that's it. These are all purely subjective evaluations for the purpose of writing fun, fast, compact, mid-size cross-platform GUI end-user applications. I should mention that in a more professional setting I would probably bite into the sour apple and just go with C++/Qt. But that wouldn't imply having fun, which is one of my goals.


> the express intent of the developers to get rid of garbage collection in favour of some convoluted, needlessly complex borrow checking

This is incorrect. Nobody wants to get rid of the GC [1]; the goal is to offer the possibility of writing GC-free code, or to work around the GC and prevent it from allocating if one wishes to do so. There is a full section on the D website about the GC and how to optimize its behaviour or work around it [2].

1. https://forum.dlang.org/post/mailman.2288.1523320489.3374.di...

2. https://dlang.org/spec/garbage.html


Okay, my apologies then. That's pretty cool and IMHO exactly how languages should do it. Give people a good GC and a way to switch it off when needed while keeping it safe. That's perfect!


> Rust … has no garbage collection

What's your reasoning behind preferring GC to Rust's automatic memory management?


Like everyone else's who prefers to have a GC. Go's GC is fine for all of my use cases, and I'd rather not spend endless time musing about lifetime annotations and how to 'trick' the borrow checker into doing what I want. Technology should serve humans, not vice versa.


> - Nim is a great language, but unfortunately seems to be eternally unready for prime time. Lack of complete enough GUI bindings.

Out of curiosity, which GUI bindings did you try/want?


nimx and a bunch of others. At that time I didn't know what framework I'd want, only the requirements: cross-platform, preferably native controls, internal and desktop drag&drop, extended clipboard support, rich text editor fields with possibilities for custom ranges, styles, and serialization, listboxes/grids with images and embedded controls.

I later decided on GTK3, not sure if it's the right idea in the long run. Nim's GTK3 bindings state:

> This package is only tested with 64 bit Linux currently.


>million dollar mistake

  echo "million dollar mistake" | sed 's/mil/bil' # :)


>Especially macros are a very, very bad idea.

Why? Is it because the code you see is not the code that runs?


Partly, yes. If they are used correctly, then they are fine, but often they are used to create all kinds of hidden side effects - sometimes even global side effects.

The other issue is that macro authors tend to create embedded DSLs. That makes the code hard to understand, unmaintainable and write-only-once in the long run.

Strangely, I find the Scheme and Lisp communities to be pretty good at creating reasonable macros. (With exceptions; I'm not a fan of the LOOP macro.) Maybe that's because they use them less for syntactic sugar and more for really expanding the language. Still, there are too many DSLs, and you'll be lucky if you understand your own CL code ten years later.


Thanks. I have the same feeling about the tendency in Ruby to use magic tricks. Prefer the Python way, although it does have support for meta-programming, etc. People tend to write magic code less in Python, AFAIK.


D doesn't have macros in any way I'm aware of?


D templates plus string mixins cover a lot of the same ground as macros. This is probably what the poster was alluding to.
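
For the curious, a minimal string-mixin sketch (the makeGetter helper is hypothetical); the string is assembled at compile time via CTFE and pasted in as ordinary source:

    string makeGetter(string name)
    {
        return "int get_" ~ name ~ "() { return " ~ name ~ "; }";
    }

    struct S
    {
        int x;
        mixin(makeGetter("x"));  // generates: int get_x() { return x; }
    }

    unittest
    {
        auto s = S(42);
        assert(s.get_x() == 42);
    }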


>(just) good enough GUI bindings

Which one(s) do you use?


Currently GTK3, because Qt is too large and bulky. It works, although the Go bindings are missing some essential features. Custom TextBuffer serialization seems to be missing, and I hope I will be able to implement it myself or work around it.


Thanks.


Oh! Where did my Pony (https://www.ponylang.io/) go? :)


The biggest dev-time factors in my professional work time are:

1. A (de-facto) standard toolchain, incl pkg management

2. A rich ecosystem of 3rd-party packages.

Which sadly rules out a number of languages that would otherwise be very interesting.

Without those 2, I can spend hours fighting with implementing code that I know could be done in a few minutes with a more widely-used language.


Since you are asking about performance of some languages, you may find this helpful https://github.com/kostya/benchmarks


> but also dev time sensitive.

What do you call dev time?

Time till we have a prototype to play with and refine the idea?

Time till we release a quick & dirty version and start counting the hours before the first bug report?

Or time till we can ship and forget?


Golang has good cross-compilation built in. You can easily build Linux binaries on a Mac. Makes deployment and containers almost as painless as the interpreted/JITed languages.


Only works for pure Go code.


If you're not confining yourself to C-like languages, I like OCaml a lot too, for a combination of fast dev time and good performance. It has some cross-platform issues and multicore support is still lacking. If you have some familiarity with Rust, you should already be used to a lot of the ideas present in OCaml.

Also, if you are already familiar with C#, there's F#, which is essentially a port of OCaml to .NET. It won't improve the performance of C#, but might improve the dev time a bit.


I have gone down the ocaml rabbit hole before. Likely, it was too early in my programming career and I got wrecked by the combination of novel concepts, poor docs, disjointed toolchain, and lack of ecosystem. I have, in more recent times, played with F# which is very nice.


A lot of these issues are now solved. The OCaml universe has now switched to Dune [1] as a uniform build system, uses ocamlformat [2] as a uniform formatting tool (succeeding ocp-indent), and the Base [3] standard library aims to be the only one; a lot of legacy cruft was removed in recent compiler versions, along with improved error messages. The Real World OCaml book [4] is now being modernized for the second edition. Recently even the design of documentation produced with odoc was improved and cleaned up a lot. So OCaml is catching up with more modern programming languages.

[1] https://dune.build

[2] https://github.com/ocaml-ppx/ocamlformat

[3] https://opensource.janestreet.com/base/

[4] https://dev.realworldocaml.org/


- D: lightning fast dev time; fast compile times; performance directly comparable to C and C++; metaprogramming capabilities second only to Lisp.


Is Rust's dev time really slower than C++'s? That doesn't sound good. Also, C# dev time is much faster than C++'s. C++ done right is hard.


I adjusted them to both be 'slow/medium'. It's all rather subjective and handwavy to be sure.


Maybe another interesting factor would be learning curve. I bet C++ and Rust have a pretty steep one compared to Go for example.


Try to write GUI code in Rust and then in C++.


There are other factors to include as well. Monitoring and management are quite important, and this is where environments like the JVM and .NET do a very good job. Ecosystem maturity, the availability of mature, quality libraries, good documentation, community size, and support all matter too.

I find these guidelines to be a good starting point when deciding what stack to go with, as opposed to chasing whatever the latest hyped languages are. We've already seen people go with the hype, only to backtrack and move to a more mature and well-established language and ecosystem.


These are excellent points. I haven't spent a lot of time in JVM land, but the tooling and resources in .NET land are fantastic and can't be ignored.


D for me is a guarantee of continuity. Unlike languages kept on corporate "life support", there is not so much risk of it becoming abandonware or of sponsors "pulling the plug" on it.

I'm very sure that C will outlive Go, Rust, and probably us all. I'm not so sure about the current wave of "a better take on C++" type languages.


C will outlive those languages as long as we keep POSIX and UNIX clones alive, that is all.


C won't have slow dev time, assuming that experienced devs are writing the code. There are a ton of abstractions directly accessible from C, and lots of people know them really well.

C might not be the fastest, though. Rust can legally apply optimizations that C compilers can't (its aliasing guarantees, for example), and Fortran is often faster than C too.


C programmers have to spend more effort writing and maintaining each line of code, given the language's low-level focus; and if you assume experienced programmers for C, you might as well assume them for the other languages too.


I really want an excuse for learning D. I'm currently working as a full-stack web developer using Node.JS and JavaScript. What I like about D on the surface is that it compiles fast, so I'll still have a fast feedback loop. It has the named, lexically scoped modules that I like so much about Node.JS, but without the performance penalty. It doesn't seem to enforce any particular programming paradigm. And it allows another level of optimization which you can only get from a "systems" language.


The vibe.d[1] framework is nice, for server-side; maybe rewrite one of your services in that?

1: https://vibed.org/


Another side note is that the vibe.d devs got sick of the callback hell of nodejs[0] and made Vibe.d as a result.

[0]: https://vibed.org/features#simplicity


I love how in vibe.d you have a single callback function which can then just deal with its client with code that looks a hell of a lot like blocking code but is secretly yielding control to other connections whenever it'd block.
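
Roughly like this, a sketch based on vibe.d's classic echo-server example (exact signatures vary a bit between vibe.d versions):

    import vibe.d;

    void main()
    {
        // One fiber per connection: pipe() reads and writes as if it were
        // blocking, but yields to other fibers whenever it would block on I/O.
        listenTCP(7000, (conn) { conn.pipe(conn); });
        runApplication();
    }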


Callback hell was solved years ago with promises and async-await.


Actually I prefer callbacks over promises, but it can sometimes be nice to have async/await, a.k.a. pseudo-threads / co-routines or "fibers". I believe it's much easier to learn about first-class functions than to learn the concept of Monads, and Promises were rushed into the JavaScript language when it would have been better to wait for userland libraries to settle on a standard before carving it in stone.


As a D user, I definitely do not recommend D for web development. For everything else, go for it. It's just that D does not have a mature web development ecosystem.

I'm hoping this will change soon, but this is the case for now. I simply don't want you to be disappointed when you finally decide to jump in.


I do not like web frameworks. All I need is a websocket module with legacy fallback, something like SockJS, to build a JSON API on top of. And a database module for persistence. But I use Node.JS for almost everything: image rendering, network services, utility tools, etc. And if a bash script requires more logic than a bunch of unix pipes, I usually write that in Node.JS too. I like that D can be used like a scripting language. Node.JS started out as an async networking framework focusing on performance, simplicity, and fast development. Despite its huge potential it has mostly become a scaffolding tool for web frameworks. What concerns me most, though, is TypeScript as well as EcmaScript piling mostly useless features onto the JavaScript language, making it harder to learn and dividing the community.


I've only dabbled with Vibe.d, but it seemed quite mature to me. Well, maybe not "mature" but usable; I would have liked a little bit more documentation for Diet templates, but they weren't that hard to pick up from examples.


In their job post[1] title & description, they specifically mention that the D programming language is used for "Software Verification & Validation". So the core tech stack may or may not be written in D.

[1]: https://jobs.lever.co/aid-driving/c4b243bd-c106-47ae-9aec-e3...


Our core stack is written in C++ so most of our open positions are about C++. Have a look at https://jobs.lever.co/aid-driving?department=Software%20Engi...


Was it a requirement for 'Software Verification & Validation' to use a different language?


D is the only language I use any more, so it's neat to see others have made similar realizations about it.


Do you get to use it for work? Or just for personal projects?


Given how few companies use it, we can safely guess personal projects.


Given his user name, he may be from Funkwerk: https://dlang.org/blog/2018/03/14/user-stories-funkwerk/


Sell me on using D, I have no skin in the game, just curious of what fans of D say.


It was one of the easiest languages for me to learn, and I felt I was as productive in it as in Py/Node (which I've used for years) after only about two weeks of playing with D on the side. There are a lot of intuitive things that D gets very right imho, from the syntax to the standard lib. Also, the generic/template support in D is a real pleasure to work with, coming from most other languages I've used.
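
As a taste of the templates, a throwaway sketch (clamp here is a made-up example; Phobos has a real one):

    // A generic clamp, constrained to types that support '<'.
    T clamp(T)(T value, T lo, T hi)
        if (is(typeof(T.init < T.init) == bool))
    {
        return value < lo ? lo : value > hi ? hi : value;
    }

    static assert(clamp(5, 0, 3) == 3);         // ints...
    static assert(clamp(0.5, 0.0, 1.0) == 0.5); // ...and floats alike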

There are a couple of challenges though. D's library ecosystem isn't as rich compared to even newer languages like Go. The other issue is that D's runtime GC (which is being slowly removed/reduced) is pretty slow compared to Go's, and is similar to Python in performance.

The shame is, on paper, D could replace Py or Node as a very developer-productive language. However, it can't compete with them because it doesn't have the same massive community. It also isn't performant enough to stand out against Go and Rust. So D is sitting in an uncanny valley without a silver bullet to stand out to any one particular audience. In many ways, D reminds me of Plan9. For me personally, it's one of my favorite languages to program in (it "gives me joy"), but I currently only use it for side projects. My hope is in these active projects:

a) D minimizes GC use and makes the GC as fast as Go's, allowing it to compete in the performance [web app] category, with vibe.d competing fairly near the top of the TechEmpower benchmarks.

b) Rust adds more sugar to improve developer 'ergonomics' and finishes the async additions.

c) Go gets "D like" generics and macros (Shh... I can dream)


Re GC, it is most definitely not being removed. Dependence on it is being reduced (with @nogc), and if you don't allocate from it, it will never trigger.
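
For illustration, a contrived sketch of how @nogc works:

    // @nogc is checked at compile time: any GC allocation is an error.
    @nogc int sum(const(int)[] xs)
    {
        int total = 0;
        foreach (x; xs)
            total += x;
        return total; // no allocations, so this compiles fine
    }

    @nogc void bad()
    {
        // auto a = new int[](10); // error: 'new' allocates from the GC
    }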

It was recently made multithreaded for the mark phase (sweep is still single-threaded), which, for workloads with a large heap but little garbage, has made it significantly faster.

There are also fork-based GCs available for Linux, where the mark and sweep are done by a separate process.

The GC is stop-the-world, so if you care about latency the default is not so great, but there are plenty of groups using D for hard real-time systems, with either no use of the GC or the fork-based one. Speed-wise the default is not too bad.


I'm not a fan of relying on any single language, so take what I say with a grain of salt, but D comes with a lot that some people wouldn't expect.

It covers a lot of low-level and high-level concepts, and has a decent package manager (dub) attached to the ecosystem.

Which means you can magically do things like:

     #!/usr/bin/env dub
     /+ dub.sdl:
     dependency "vibe-d" version="~>0.8.0"
     +/
     // A single-file script: dub reads the embedded recipe above,
     // fetches vibe-d, compiles, and runs.
     void main()
     {
         import vibe.d;
         // Serve HTTP on port 8080; every request hits the delegate below.
         listenHTTP(":8080", (req, res) {
             res.writeBody("Hello, World: " ~ req.path);
         });
         runApplication();
     }
At the same time, D also supports a "betterC" mode, so transferring your low-level code into a language with higher-level support and higher levels of safety is fairly easy.
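
A minimal sketch of what betterC code looks like (compile with "dmd -betterC"):

    // No druntime: use a C-style entry point and the C standard library.
    extern (C) int main()
    {
        import core.stdc.stdio : printf;
        printf("hello from betterC\n");
        return 0;
    }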

It has generics, compile-time evaluation, a great FFI story, and a fairly decent ecosystem of packages out there for you to play with. It has threading and multiprocessing and so on.


I've not fallen down the D rabbit hole entirely (I'm more focused on Nim), but both of them can consume and spit out C APIs like nobody's business. See some great C lib you don't want to hack on, but do want to utilize? D should work well. Want to offer your library to the wider family of C-like languages? D can do it.

Walter Bright and co. designed the compiler so well that it sometimes feels like you're running an interpreter instead, so iterating on your code is comfortable as well.


D has one thing going for it: Walter Bright and Andrei Alexandrescu.

Other than that, not so much.

D is a language with no clear strategic advantage.

D is a language that doesn't have enough high-level features to make it a top choice for application development,

and it is not low-level enough to be a top choice for systems programming.

D's failure is at a strategic level; it won't be solved by adding new features, or even by removing features.


Actually I think D would be helped by removing features and focusing on really refining the existing stuff, especially CTFE which I think is the real selling point of D.
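
For anyone unfamiliar: CTFE means ordinary D functions can run at compile time. A quick sketch:

    ulong fib(ulong n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    // 'enum' forces compile-time evaluation; the binary just holds the constant.
    enum fib20 = fib(20);
    static assert(fib20 == 6765); // checked before the program ever runs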


What kinds of features do you think should be removed? There were some people on the forums who were in favor of removing any feature that could be reimplemented with templates as a library solution.


Optional parentheses, UFCS, BetterC, named character entities, and the Objective-C and C++ interfaces, to name a few.


TBH, I'd support removing every feature related to C++ interfacing. To interface with C++, you need to become like C++; if you become like C++, people will just use C++. D has spent considerable engineering effort in various areas in pursuit of a mythical "C++ programmer", but those people seem to be moving to Rust instead.


Removing the C++ interfaces? That would completely break a decent chunk of the D compilers themselves, no?

betterC and UFCS are both opt-in, so what's the rationale for breaking a huge amount of library code?


The rationale would be focusing the efforts of the small development community on the important features and having to worry less about breaking existing functionality.


So we remove a bunch of well-tested, working language features on the basis of making the community do less work?

UFCS will probably never change, and even if it did, it's actually a relatively trivial part of the compiler to implement (i.e. just a change to the grammar plus a basic lookup and type check).
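
For context, UFCS just lets a free function be called with method syntax; the compiler rewrites x.f(a) into f(x, a). A trivial sketch:

    import std.stdio : writeln;

    int doubled(int x) { return x * 2; }

    void main()
    {
        writeln(doubled(5)); // classic call syntax: prints 10
        writeln(5.doubled);  // the same call via UFCS (optional parens dropped)
    }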


>So we remove a bunch of well tested, working language features on the basis of making the community do less work?

No, it's about focusing effort.


Some polls were tried, but you will hardly get any agreement on which features to remove.



Removing what?


I've wanted to get into this field for a while now - how does one get in? I'm hardly going to build myself a hobby self-driving car!

While I'm working full time as a backend engineer, I do have a final placement I have yet to complete. Would doing it in computer-vision-y areas add particular value, or would my best bet be to skill up in C++ and then look for a relevant job?


Check out the Udacity self-driving car class.


If I found the right one [0], it's a bit pricey and seems to encourage doing it in a burst. It looks very connected to the industry, at least in terms of course load. Thanks for the pointer; I'll look into it some more when I get tired of my current job (getting there slowly but surely!)

[0] https://www.udacity.com/course/self-driving-car-engineer-nan...


I thought that was a joke! Awesome though!




How safe is D, compared to C/C++ on one end and Rust on the other?


If you use the GC (and don't go out of your way to hide pointers/references from it), it's as memory-safe as any GC'ed language. If you avoid the GC and do your memory management manually, you at least have RAII, so it shouldn't be any less safe than C++. D also includes some (experimental?) language features which aim to improve memory safety[1], but they're nowhere near as robust as Rust's type system.

[1] https://dlang.org/spec/memory-safe-d.html


Type safety: excellent. If you use @safe, you have to explicitly tell the compiler when you're doing nasty things. `cast` is a keyword, so you can easily see where it is being done.

Memory: very good with @safe (I've not used Rust so I can't compare); there are a few holes in the corners. @safe is not the default, though it is transitive, so if main is @safe then everything else has to be @safe or @trusted.

Threads: not so much; we're working on it. You can do thread-safe designs, but it is ultimately on the programmer not to mess it up.
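
A quick sketch of how the attributes interact (illustrative code, not from any real project):

    @safe int fine(int[] xs)
    {
        return xs.length ? xs[0] : 0; // slices are bounds-checked, allowed in @safe
    }

    @safe int alsoFine(int* p)
    {
        // return *(p + 1); // error: pointer arithmetic is not allowed in @safe code
        return *p;          // a plain dereference is fine
    }

    @trusted int secondElement(int* p)
    {
        return *(p + 1); // hand-audited escape hatch, callable from @safe code
    }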


I would say it depends. Maybe Walter will chime in, but from what I can tell it is entirely possible to use raw pointers (-betterC) and step into unsafe territory, but if you're using the full runtime it should be as safe as Go, Java, etc.


-betterC removes the druntime features, but pointer safety is gained with the -dip1000 flag (with @safe) regardless of whether you use betterC or not.

@safe will also stop you doing obviously unsafe pointer arithmetic unless you abstract it into a @trusted function.


Opt-in memory safety by annotating with @safe. You pay for it only if you want it.


Interesting. Nim is opt-out.


D had the wrong default, but now it is too hard to break everyone's code, which is why it stays like that.


Wrong default, perhaps for you, but I for one don't care about memory safety at all.



