Developer here. I was going to post this here in a couple of weeks after launching the product and creating a separate site for the language with much better information about it.
I'd also like to hear your opinion on not allowing function arguments to be modified, except for receivers. It's an idea I had that isn't really implemented in any language I know of.
For example:
mut a := [1, 2, 3]
So instead of
multiply_by_2(&a)
we have to return a new array (and this will later be optimized by the compiler of course)
a = multiply_by_2(a)
I think this will make programming in this language much safer, and the code will be easier to understand, since you can always be sure that values you pass can never be modified.
For some reason all new languages like Go, Rust, Nim, Swift use a lot of mutable args in their stdlibs.
You can still have methods that modify fields; this is not a pure functional language, since it has to be compatible with C/C++.
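For comparison, here's a rough analogue of the two styles in Rust (not V syntax; the `Array` type and the function names are just for illustration): a method may mutate its receiver, while a plain function can't touch its argument and returns a fresh value instead.

```rust
struct Array { data: Vec<i32> }

impl Array {
    // receiver mutation stays allowed: the method changes its own fields
    fn multiply_by_2(&mut self) {
        for x in self.data.iter_mut() { *x *= 2; }
    }
}

// a plain function can't modify its argument, so it returns a new value
fn multiplied_by_2(a: &Array) -> Array {
    Array { data: a.data.iter().map(|x| x * 2).collect() }
}

fn main() {
    let mut a = Array { data: vec![1, 2, 3] };
    a.multiply_by_2();            // fine: receiver mutation
    let b = multiplied_by_2(&a);  // fine: value-returning style
    println!("{:?} {:?}", a.data, b.data);
}
```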
I'm not sure if I'm the target audience for this (low-latency trading), but here's my thought - code which would allocate in a fast path is a strict no-go for me, and this runs fairly close to that in a few regards:
> It seems easy to accidentally make non-allocating code allocate by changing some variable names (b = fnc(a) - oops, allocation)
> I would be extremely wary of trusting a compiler to realize what sort of allocations are optimizable and which are not - I already have problems with inlining decisions, poor code layout, etc. in fairly optimized C++/Rust code compiled by GCC/LLVM.
Replace allocation with any other expensive work that duplicating a data structure would entail, and you have the same story. I suspect many of the people who would be evaluating this against C/C++/Rust have similar performance concerns.
You need to stop pretending they are not globals. Just accept that you work on one contiguous patch of memory and are simply passing pointers around. If you don't lie to yourself, you can't shoot yourself in the foot when the compiler doesn't see through your lie.
I think Rust and Swift's approach of making the user explicitly annotate these kinds of parameters (both at function definition time and at call time) works pretty well.
I think you're right that it can often be an antipattern. But there are also use cases (usually for performance reasons), and the real problem occurs when you are not expecting the modification to happen. If the parameter is annotated, then it's obvious that it might be mutated, and it's less of an issue...
P.S. Looking forward to the open source release. This language looks pretty nice/interesting to me, but there's no way I would invest time into learning a closed source language.
Technically (and we are technical folks) it's a language with a closed source reference implementation. A serious language designer is going to specify his language, so that other implementations, closed or open source, are possible if not available. C is the ur-example: there are dozens of implementations, some proprietary and some free.
We aren’t in a phase where languages have multiple implementations right now.
I can’t think of any well-known newish language (created in the last 10 years, say) with multiple implementations. Rust, Kotlin, Swift, Julia, Dart... any others?
Go might be a counterexample with gccgo, but that was built by the same team and I have the impression (maybe mistaken) that it's fallen by the wayside.
I don’t know if this indicates a really new language development style, with less emphasis on specification, or if it’s just a cyclic thing and some of these new languages will gain more implementations as they get more established.
Yeah, but no language of the ones the parent mentioned has a non-toy, non-personal-project alternative compiler.
Perhaps only Golang (gc and gccgo).
For all others, everybody uses the standard compiler. Even in Python, PyPy is not even 10% of the users.
Whereas in C/C++ and other such older languages there are several implementations (MS, GCC, LLVM, Borland, Intel) with strong uptake and strong communities/companies behind them.
Yes, Go has a formal specification, which not only opens the possibility for alternative implementations but, more importantly, allows for the development of tools like linters.
It's way harder to develop tooling for a language which is only defined as "what its compiler can compile".
This. Letting the user annotate it is better than enforcing a behavior that is adequate in some scenarios but awkward in others. In gamedev, for example, generating copies in the game loop is only acceptable for small objects like vectors, and only if they're allocated on the stack. Even if the compiler optimizes it away, it is better to express the intent clearly in the code.
Yeah, but I meant that a = multiply_by_2(a) still looks like it is copying things even if it isn't. Let it be immutable by default and mutable with a keyword.
In Ada, procedure and function arguments must be annotated with whether they are “in”, “out”, or “in out”. An “in” parameter can never be modified, an “out” parameter has no initial value, and an “in out” parameter has an initial value and can be modified.
Overall, if you don't know Ada, I'd recommend taking a look at its features. It has design goals very similar to those of your V.
Although you can certainly go the pure functions route, I wouldn't recommend it for performance.
There's a false dichotomy between functions and methods, which are simply (sometimes dynamically dispatched) functions with a special first argument. If you allow mutable first arguments, why not any argument?
Instead of the language deciding what's mutable and not, I'd rather have a const system like C/C++ to ensure that changes aren't happening behind the programmer's back.
Hello, this is an excellent language! Have been looking for something like this for a long time!
Re: "not allowing to modify function arguments except for receivers" -- maybe instead all fields const by default, but having something like an optional mut modifier?
A quick question, how does hot reloading work (with presumably AOT compilation and no runtime)?
(Perhaps there's a OS mechanism to change already loaded code in memory that I should know).
> For some reason all new languages like Go, Rust, Nim, Swift use a lot of mutable args in their stdlibs.
Both Rust and Swift require specifically opting into parameter mutation (respectively `&mut`[0] and `inout`), and the caller needs to be aware of it (by passing a mutable reference or prefixing the argument with `&`, respectively). Only the receiver is "implicitly" mutable, and even then it needs to be mutably bound (`let mut` and `var`, respectively).
Interior mutability notwithstanding, neither will allow mutating a parameter without the caller being specifically aware of that possibility.
The ability to mutate parameters is useful if not outright essential, especially lower down the stack where you really want to know what actually happens. Haskell can get away with complete userland immutability implicitly optimised into mutations (where that's not observable); Rust, not so much.
[0] or `mut` in pass-by-value but the caller doesn't care about that: either it doesn't have access to the value anymore, or it has its own copy
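A minimal Rust sketch of that opt-in at both ends (the function names here are made up):

```rust
// callee must declare the mutable borrow...
fn double_in_place(xs: &mut Vec<i32>) {
    for x in xs.iter_mut() {
        *x *= 2;
    }
}

// by-value `mut`: the callee may mutate its own copy/move,
// but the caller never observes it
fn push_zero(mut xs: Vec<i32>) -> Vec<i32> {
    xs.push(0);
    xs
}

fn main() {
    let mut a = vec![1, 2, 3];
    double_in_place(&mut a);   // ...and the caller must write `&mut` too
    let b = push_zero(a);      // `a` is moved here; its mutation is invisible to us
    println!("{:?}", b);
}
```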
I wanted to ask OP about this part specifically, so I’ll write my questions here.
a) how do you plan to do this?
b) will the optimization still kick in if you name the return value something else than “a”?
c) what if “a” is an argument passed to the function that calls “multiplyBy2”? Then doing an in-place update would modify the value of “a” for some other function that has also been passed “a” as an argument.
This is exactly what I was looking for. There's no real option for a language that allows interactive coding and is easily embeddable. Please open source it ASAP. You will get contributors, starting with me.
I’m not taking anything away from the goals of this project but I do feel it’s worth mentioning that there actually are lots of languages that offer interactive coding and are embeddable. Eg JavaScript, Python, LISP, Perl, Lua, etc. Heck, even BASIC fits that criteria.
I do wish the author the best with this project though. Plus designing your own language is fun :)
All of them have shortcomings when one wants multi-threading.
JavaScript, Python and Lua implementations are single threaded. Perl is slow. LISP is ideal, but the commercial distributions with these features have licensing costs of thousands of dollars. The open source (Common Lisp) implementations all have other deficiencies: not easily embeddable and/or big image sizes and/or slow and/or with poor GC. Currently the best open source option seems to be Gambit Scheme. I am playing with it, and while the author is extremely supportive, some points are still a little bit rough.
In the single-threaded world there is already a clear winner, and that is Lua.
I actually think having different models (functional and imperative) just adds to confusion. I don't think immutability is all that useful personally, unless the language is purely functional to begin with. I'd keep it simple and stick to C as much as possible.
Why not go the other way and pass everything as a reference? After all, that's what Java does, and that's how you'd pass any struct in C anyway. It's a rare case when I need to forbid the called function from modifying an argument, whether for whatever reason or because I don't trust it; in that case you can make a copy beforehand or use a const modifier. But in most cases I'd expect functions to modify the memory that I pass in, instead of allocating and returning new structures.
Why not have a simple `class` construct as in JavaScript? Keeping functions together in a class is very convenient and means you don't have to pass the struct as the first argument each time. That way `Array` can be a class, and would always be passed by reference. No ambiguity there; class instances are always mutable. Everyone is already familiar with it, and it works.
A class method can simply map to a global C function:
```
ClassName_MethodName(*self, ...)
```
As an aside, using a syntax that people are already familiar with (and APIs!) would be great, and make something like this instantly usable. JavaScript has a fairly small core API which would be easy to emulate for example.
> Why not have a simple `class` construct as in JavaScript? Keeping functions together in a class is very convenient and means you don't have to pass the struct as the first argument each time.
This already seems to have a way to associate functions with data structures, in the same way that methods are done in Go, via a "receiver" before the function name.
E.g.
```
type Something{}
fn (self mut Something) method() { ... }
```
I'm thinking that it's easy to make a mistake that would prevent the optimization from happening, so I'd personally much rather be explicit about mutability than betting on having satisfied the optimizer.
This looks like a really interesting project, and I look forward to trying it out!
I have a possibly unhealthy obsession with using namedtuples in my Python and so this pattern appears frequently in my code:
object = object._replace(foo=1)
But I usually encapsulate the _replace call in the class so it's more like:
object = object.update_foo(1)
I personally find it can make the code easier to understand, like you say, but it seems like you're getting a lot of disagreement from the other comments.
> I personally find it can make the code easier to understand, like you say, but it seems like you're getting a lot of disagreement from the other comments.
The disagreement they're getting is not on the use of immutability and pure transformations, it's on the fantasy that a "sufficiently smart compiler" would be able to optimise this (especially non-trivial versions of this) into mutations under the covers.
Furthermore, V is apparently supposed to be a fairly low-level language. If you have to rely on the compiler to perform the optimisation, can't statically assert that it does so[0], and it fails to, that is extremely costly in both "machine" time and "programmer" time (as you'll start wrangling with the compiler's heuristics to finally get what you need).
If you want immutability, do it, but do it properly: build the runtime, data structures[1], and escape hatches which make it cheap and efficient; don't handwave that the compiler will solve the issue for you.
[0] and if you are you may be better off just adding mutation to the language already
[1] non-trivial immutable data structures are tree-based and translate to a fair number of allocations, you really want an advanced GC
What about the case of multiple outputs? It's traditional to have functions that take other mutable arguments to store different auxiliary return values in. So, with this proposal, you couldn't do that and would have to construct random blobs to store all return values and then unpack them.
That's a solved problem: the "random blob" is called a tuple. Or, since the language is inspired by Go, you can have bespoke multiple return values instead of a reified generic structure.
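For instance, sketched in Rust (the `div_rem` function is invented purely for illustration), multiple results come back as a tuple and get unpacked at the call site:

```rust
// return both results at once instead of writing into an out-parameter
fn div_rem(a: i32, b: i32) -> (i32, i32) {
    (a / b, a % b)
}

fn main() {
    let (q, r) = div_rem(7, 3); // destructure directly; no "blob" to manage
    println!("q = {}, r = {}", q, r);
}
```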
I think it would be a bit weird if fields of structs can be modified, but bare values can't. Kind of feels like the inconsistency between `Integer` and `int` in Java.
So I would say that having to explicitly mark function parameters as mutable (like in Rust) is a better approach.
Can it really always be optimised by the compiler? For example, I imagine optimising `sort(&arr)`, which cannot mutate arr, could be quite difficult, no?
Detecting whether something can be mutated in place will require static analysis to see if there are aliases or pointers to the data. If this is an optimization based on whether something is safe to mutate in place, you'll run into the problem where performance differs depending on whether something can be optimized or not. For example, adding an extra alias "x" to the data makes the sort call suddenly perform worse, since the compiler sees that it can't mutate "a" in place.
This is assuming that you allow multiple aliases to the same data. The reason Rust has lifetimes and borrowing is precisely to be safe about mutation. Rust wouldn't allow sort() to modify "a" in-place in the above code.
Unless `a` is a linear value, somebody might have a reference to it, so you can't just sort it in place under the covers. The entire thing is useless if it looks like you don't have side effects but the compiler visibly breaks that.
And you probably want to pick one of in-place or out-of-place sorting.
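For what it's worth, Rust already makes the two signatures distinct, so the caller always knows which one they're getting (a small sketch):

```rust
fn main() {
    // in place: needs `mut` and exclusive access, so no alias can observe it mid-sort
    let mut a = vec![3, 1, 2];
    a.sort();

    // out of place: clone first, then sort the copy; anyone holding `b` still sees [3, 1, 2]
    let b = vec![3, 1, 2];
    let mut c = b.clone();
    c.sort();

    println!("{:?} {:?} {:?}", a, b, c);
}
```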
Yeah, as a rule of thumb I'd say "If Haskell doesn't already do this optimization, find out why."
I say "rule of thumb" and I mean it that way. Sometimes there will be Haskell-specific answers. But if your programming language has less code metadata available than Haskell but is promising more optimizations, it's worth a cross-check. I agree with you strongly in this particular case; without type system support for this specific case, it's going to be very hard to optimize away things like a sort. You start getting into the world of supercompilation and whole-program optimization, the primary problems of which for both of them is that they tend to have intractable O(...) performances... but something about them makes them seem intuitively easy. I've seen a number of people fall into that trap.
(I haven't personally, but I understand the appeal. My intuition says they shouldn't be that difficult too! But the evidence clearly says it is. Very, very clearly. We're talking about things like optimizations that are super-exponential in complexity, but seem intuitively easy.)
I assume `a` would need to be copied into the local scope of the function and the optimization would be to elide the copy after analysis shows the original a is safely consumed by the call site so it does not require persistence.
This probably means lots of aliasing restrictions or in the case where the optimization can't be done, copying could be an expensive side effect of an innocent refactoring.
I hear Swift uses something like this, though it's copy-on-write. I've not used Swift in any significant capacity. Does anyone else have experiences to share with this model?
I don't think copy-on-write will prevent a copy here, since the copy is being written to inside of sorted. I don't think the compiler is smart enough to elide the copy, either.
It definitely can and does work for similar situations in other languages. It's fragile, though, as accidentally aliasing somewhere, or adding other kinds of uncertainty around which values are returned, makes sinking the return allocation into the caller (a required prerequisite) much more likely to fail.
A good way to imagine this is return value optimization giving you a copy that is constructed directly in place over the original, which allows the work to be skipped. But that can require a whole lot of other trade-offs around calling conventions, optimization boundaries and so on. C++ has dealt with some of this complexity recently, but the nuances took years to sort out between standard revisions, and it only became required in some cases (rather than merely legal) after compilers had had plenty of time to work on it.
Yeah, I don't doubt it if Clang can do this optimization for C++, but I don't think the Swift optimizer is quite there yet since it needs to peer through many more layers of complexity.
> V is compiled directly to x86_64 machine code (ARM support is coming later) with no overhead, so the performance is on par with C.
Direct compilation to x86-64 machine code does not get you performance on par with C (by which I assume the author means GCC or Clang). The optimization pipelines of GCC and Clang have had decades of work put into them by some of the best compiler engineers in the world.
Since the author states that the compilation time is linear, this would seem to imply that a full suite of optimizations are not being done, since many optimizations done by GCC and Clang have nonlinear complexity. It is easy to get fast compilation if you don't perform optimizations.
> - Thread safety and guaranteed absence of data races. You no longer have to constantly ask yourself: "Is this thread safe?" Everything is! No performance costs either. For example, if you are using a hash map in a concurrent function, a thread safe hash map is used automatically. Otherwise a faster single thread hash map is used.
This description doesn't guarantee freedom from data races. (Java's memory model basically fits this description, for instance, except for the specific case of hash tables, which aren't built into the language.) Even if it did, the tricky part is determining what a "concurrent function" is. The obvious ways one might imagine doing this tend to fall down in the face of higher-order functions.
Yes, you are right. I had a mental note to update the description, I never expected this to be posted on HN so early :)
Just updated it:
> V is compiled directly to x86_64 machine code (ARM support is coming later). There's also an option to generate C code to support more platforms and achieve better performance by using sophisticated GCC/clang optimization.
GCC/Clang will definitely optimize better. One way to piggyback on those is for V to spit out C code and let GCC/Clang do the hard work to produce production-mode code, while V can still do fast compilation for development-mode code.
My two biggest questions about V are 1) How is memory managed?, and 2) How is concurrency done?
"V has no runtime". No GC, but you don't have to manually release memory, like Rust but much easier. Sounds great. How?
And "no race conditions ever" and "everything is thread safe". You can do that with "no runtime" fairly easily if there's no goroutine-style concurrency. I didn't see any mentioned, but I could have easily missed it.
Those two aspects of the language are fundamental enough that I would certainly want to read about them near the top of any overview of the language.
Good question. I haven't mentioned memory management because it's not done yet. I know for sure there won't be a GC or reference counting.
I want to do something similar to Rust's approach, but much much simpler. It's not an easy task.
Right now the language handles very simple cases. Small strings are placed on the stack, and local variables that are not returned are cleaned up automatically.
Globals are not allowed, function args can't be modified, so that helps a bit.
No GC, no reference counting, no manual memory management... Hmmm... Not everything can be managed by RAII/SBRM. Let's say I have a function which loads a complex document (like a spreadsheet), does some changes, saves the document and then exits. Who will dispose of this complex, dynamic document in memory? This is the CORE question! If there is a no-GC, no-RC, no-manual-management solution to this, then I AM REALLY INTERESTED to know about it...
Yes, which is why I asked. He seems to be saying it will have the ease of GC with the performance of fully manual memory management with none of the costs of either. I don't see how radically simplifying Rust's approach can do that, but I don't have to. If he can find a partial solution that is significantly better, that will be great. I don't know if he'll succeed, but I'm rooting for him.
To me the central concurrency scheme is one of the defining features of a modern language. Go and goroutines, ES/Node and event-driven paradigms, Java and threads... That does not mean a language can't handle many types of concurrency, but it's good to be opinionated on a preferred concurrency scheme from the get-go and have native methods for dealing with intercommunication (ie. Go's channels).
I'm liking Go's built-in, lightweight goroutine approach a lot more than the other "afterthought" approaches, but I think you need a runtime to dynamically allocate the goroutines and adapt flexibly. I don't want to write one myself or link to some library. If V doesn't have some goroutine equivalent (or better) built into the language, I probably won't be persuaded by its other features. Sending bits of code off to do their various jobs concurrently with (almost) the ease of calling functions sequentially would be hard to give up.
Making a programming language specifically for the needs of one program, then developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems. The Sapir-Whorf hypothesis may not be something people currently believe about natural language but for sure it's true about programming: the language you program in conditions what you think, which conditions what program you write. When the two evolve together, evolution can go to new places.
This strategy is time-honored in the Lisp world, where making a language and writing a program are more intertwined, and the cost of making a language much lower, than they usually are.
The downside to this is that unless your language gets widespread adoption, you can corner your community into a bubble and drastically increase the barrier to contribution.
I think one of the issues with the GNOME Project is this: they use a language specifically designed for GNOME/GTK+ and, as if that were not enough, it has two different syntaxes: Vala, which is sort-of-kind-of C#-like, and Genie, which is sort-of-kind-of Python-like.
> unless your language gets widespread adoption, you can corner your community into a bubble and drastically increase the barrier to contribution.
Only if your language (and standard library) is of comparable complexity to that of widespread languages. C has more quirks than it lets on, C++ is a monstrously complex beast, Java has an enormous standard library…
If however you keep the syntax and semantics of your language simple, they can be learned in a matter of minutes. If you keep the "standard" library focused towards your application, it won't require more effort than any other regular application.
The real problem with custom languages, I think, is that very few programmers can actually write one. Most others don't even see the need, I think in part because of motivated cognition (If I delude myself into thinking I don't need something, I don't have to face the fact that I can't do it). Though the main problem is probably education: we are taught that languages are chosen, not made. As for how they are made… well, that's the realm of geniuses who have way too much time to spare.
Every programmer should have an introduction to programming languages, and we need more specialists who can whip up a DSL in a couple days. Our craft would be very different (and I think much better) if we did that.
That would certainly be a good way to do it (though I reckon the limit between a mere library and an internal DSL is a bit fuzzy). Do you however have any widespread powerful language in mind? I don't know of any. Even all of them combined probably still doesn't count as "widespread".
Also, once you understand the problem space well enough, a custom syntax can be a nice bonus.
Elixir allows you to build powerful DSLs within the language using macros and optional syntax. Also, if you write CSS well you end up with a DSL describing your screens. What other languages give you this power?
To add to the chorus, most ML dialects, Forth, Ruby, Self, Smalltalk, Python, Mozart/Oz, D, Julia, Lua; to a significant extent Perl, PHP, Nemerle, Octave, R, Javascript, shell, even venerable old SNOBOL.
Hence why making a new application-specific, general purpose language is probably not the greatest idea versus building a DSL within one of the languages you mentioned.
Simplicity is an undervalued quality. From the point of view of dependency management, if one is developing a large codebase in C++, it's far simpler to have the scripting language be native to that environment than to first create a DLL of the C++ codebase and then implement an FFI to it (using e.g. SWIG, P/Invoke, or such) to call the C++ context from one of these "premade DLL" substrates.
It's fairly trivial to expose C++ libraries to any other language context, IFF the library is wrapped in a DLL that respects the C ABI. The reverse is not true. You need to have some message passing mechanism in that case (yes, there are several not-so-hard ways to do that, but again, it's added complexity).
Yes, and building a library (a DSL) in an existing powerful language that can do it (and there are many as people said in this thread) for exactly what you need, is _vastly_ simpler than making a new language toolchain (which linker? which codegen? which typing?).
I agree - choose the best tool for the job. As a counterpoint: implementing an interpreted macro-less Lisp is a few thousand lines of code in C or C++. So if the goal is just to bootstrap some language, the effort is more about typing than design or innovation (just copy the interpreter in chapter 4.1 of Structure and Interpretation of Computer Programs).
A stack-based language like Forth would be even less work.
(Lisp) "dialect" seems to refer to a particular implementation/variant of Lisp, e.g. CommonLisp, Scheme, Racket, Clojure, MacLisp, NewLisp, etc. rather than a domain-specific language built in a Lisp.
When it last showed up here on HN, I was very intrigued by it. However, it makes a lot of promises that I think many here have heard from other projects before. It seems very ambitious. The documentation isn't the best, which makes it harder (especially due to the different syntax). Overall, though, I am still a little skeptical it can keep its promises. (With that said, I hope it does: it looks like it has a lot of potential.)
That depends heavily on what the transpiled C looks like, machine-generated code can be pretty human-unreadable. In particular you generally lose all comments, you can end up with autogenerated variable and function names, huge functions with completely unidiomatic C code etc...
I agree that modifying Chicken's output like you would handwritten C code would be an exercise in madness. But it's possible to design a language to have a nice transpilation story, for example PureScript deliberately puts restrictions on the names of types and operators so that they have a sane name in JS. Of course there's still a likely style mismatch, but what can you do?
I suspect you already know this, but for the benefit of others.
Erlang came about like that, starting from Prolog with a telecoms program and new language in mind, and developing both together. Prolog (I learnt recently) is surprisingly Lispy, e.g. data and program in the same form and easy for the program to change, everything in lists, and very FP - recursion for loops, non-mutating variables etc.
I don't know how many other languages started like that. PHP, for one. Probably a lot.
That was true up to PHP5, where a Java-like, saner subset of the language with stronger typing became available.
The genius of PHP was in the code delivery model and the way you could intermix HTML and code in a single page to be served by Apache.
That democratized Web programming and programming at large. The rest of the language was bad, but it didn't matter for success (see "worse is better", where "better" means fitter for the market).
If you’re saying that PHP was a useful, Turing-complete templating language, I agree. I don’t think it was well-suited for large code bases with lots of business logic, and these days I’m horrified by the idea of a Turing-complete templating language in the first place.
Turing completeness doesn't bother me as much as all the security issues that existed by default (some of which are still present to this day).
I wish programming languages made a type-level distinction between strings that are and aren't tainted by user input. That would make it so much harder to accidentally introduce injection vectors.
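A quick sketch of how that type-level taint distinction can be done with newtypes in Rust (all names here are hypothetical, not any particular library's API):

```rust
struct Tainted(String);   // raw user input
struct Sanitized(String); // escaped and safe to splice into a query

fn sanitize(input: Tainted) -> Sanitized {
    // stand-in for real escaping logic
    Sanitized(input.0.replace('\'', "''"))
}

fn run_query(fragment: Sanitized) {
    // only Sanitized values can reach this point, so a forgotten
    // sanitize() call becomes a compile error instead of an injection
    println!("SELECT * FROM users WHERE name = '{}'", fragment.0);
}

fn main() {
    let user_input = Tainted(String::from("O'Brien"));
    run_query(sanitize(user_input));
    // run_query(Tainted(String::from("x"))); // would not compile: expected Sanitized
}
```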
I totally agree that PHP was not well suited for writing large and complex programs, but its ubiquitous availability as a deployment platform made it an interesting target anyway.
The easy deployment also made it easy for people to learn it by themselves, and consequently there was a large pool of programmers who knew PHP, making it an interesting language from a hiring point of view.
And that's how we ended up with piles of crappy, proprietary and OSS PHP code in the wild. The quality of the language is only a minor factor in its success.
I was just trying to think of (well-known) languages written that way. I guess there've been just as many dead languages not written with a particular program in mind.
Not necessarily. It's easier in Lisp, and a decades-long tradition, but Lisp is not even the only language of which that's true—there's also Forth. And there are many opportunities to apply this approach beyond those two. The more the better, because each qualitatively different starting point will lead to qualitatively different systems.
The software ecosystem is not well served by everyone being focused on the same few familiar programming approaches. Sure there are economies of scale—better tooling, programmer fungibility—but there is less intellectual diversity. More people should realize the tradeoffs here: yes you lose a lot when you start making your own language, but you also gain a lot. The program you're trying to write becomes more writeable. If that doesn't sound significant, it's because we're so conditioned to think the other way. There's another advantage, too: it's deeply intellectually rewarding. That provides staying power to work on significant projects over the long haul and is a solid motivation to do something many people think is crazy.
Really, you can write a DSL in almost any language. I've worked with Java and TypeScript Selenium testing suites that were so project-specific, with everything abstracted out to methods, that you could barely tell what language you were writing in, just that it was some C descendant.
> The software ecosystem is not well served by everyone being focused on the same few familiar programming approaches. Sure there are economies of scale—better tooling, programmer fungibility—but there is less intellectual diversity.
Then it seems that the problem has more to do with political economy than with programmer culture. Capital and managers want programmers to be cogs, not artisans.
Can’t expect programmers to change this political problem, either. Especially considering that the nerdier the technologist, the less political he or she is.
Writing a simple TDD kata in OMeta was really mind opening.
That, and Maude is another fascinating language. Pure term rewriting and explicit definitions of all data structures seem almost like a higher level of abstraction or programming.
I think for specialized projects it makes sense, and I can think of two off the top of my head: Godot and Processing. Godot is a game engine that is highly extensible and has one main scripting language reminiscent of Python, plus another that's for lower-level patching of other languages into the engine.
Processing, on the other hand, is more of a way for visual people to get into programming and do visual representations of data or art. As a matter of fact, the demo for V / Volt made me think of Processing, and finding out that it can be cross-platform made me wish Processing would produce native binaries like this that compile down to very small executables while still maintaining the same syntax (which is very Java-like, though it wouldn't be necessary to reproduce a full-on Java-like language for this purpose).
I totally agree, for certain projects it makes a lot of sense to build your own language. I wish those mainstream engines would stop embedding C# (and I love C#) and make something much more creative. There's the not-so-mainstream engines out there too that do implement their own languages like GameMaker and friends. DarkBasic also comes to mind.
What a terrible idea. I'm surprised I need to spell out that the benefit of learning a language is that you can actually use it, and the effort-reward ratio is far better if you can reuse it widely. Plus a language is far more than its syntax and compiler - its the ecosystem that makes a language productive. The last thing we need is for an explosion of half-implemented, poorly supported toy languages for individual programs.
I disagree. A cambrian explosion of 'toy' languages is exactly what we need so that the few that become popular reach the maturity you desire.
I have this itch that there is an explicit slot in our native language infrastructure for a scripting language that is a bit less clunky than Lua, not so huge as Python, and more ClojureScript like than the other embedded lispies out there, that is trivial to embed, has a fantastic interface to C++ and is utterly pragmatic yet elegant.
I agree on principle. However, in the comment above I was thinking about a scripting language that is trivial to bolt on to an existing native program, by including just a few header and source files. Like Lua, Duktape, etc. Or how sqlite is distributed in amalgamated form.
I agree, but at least some of the ecosystem concerns can be mitigated if you choose the right technologies to build your language. For example, Eclipse Xtext lets you design languages that can easily interact with Java code and tooling, and it provides a lot of tooling support, like build integration in the IDE, syntax highlighting and code completion, without much effort. If you keep the scope of the language small and treat it like any other module in your project, then it's not hard to maintain.
Meta-circular composition forms a powerful design technique with more than just algorithms and code. The philosophy of affordances, fault prevention, fault recovery. Self similarity helps with sourcing, alignment, jigs, etc. Many design patterns in code have analogs in other forms of engineering.
Creating a language is in an orthogonal dimension to the plane that Sapir-Whorf lives on. The realization that you can create and utilize constructed languages, not just the given languages, to solve problems. A computer programming language is a meta-tool, one uses the language to shape and reify ideas. So a computer programmer is a meta-tool user. Someone who creates computer languages is a meta-tool creator. And they are designing an idea that is congruent mathematically, mechanically (can we compute it) and mentally for a human. Maybe my version of the Turing test is, "design me a computer language to represent and solve X"
> [...] developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it [...]
I find this applies to most development, once you muddy the distinction between program and language you can see how the concept applies to other things - that is, it's a continuum, and artificially and prematurely locking down the design and implementation of a lower level piece before exploring the landscape can hurt... though determining what is "premature" and what is not is of course pure intuition.
> Making a programming language specifically for the needs of one program, then developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems.
One example of this is video game designer and programmer Jonathan Blow, best known for creating the game Braid.
A couple of moments in that video which I would like to highlight:
At 11:15, responding to a question from chat:
> Is there a specific reason the game is kind of in top-down 3D?
> Because that is what helps the game mechanics that we need for this game. If it was in first-person (laughs)... it would be kind of amusing... maybe we should do that (smiles) that'd be fun.
At 14:05, responding to another question from chat:
> Do I find the language scaling, well, now that I am dealing with a more complex program?
> Yes. I've been dealing with a program this complicated for like five or six months so this is nothing new.
At 14:15, responding to another question from chat:
> Do I have my own shader language?
> No, right now we are using OpenGL and [unintelligible] GLSL.
The answer to this third question indicates to me that he is focusing on getting something that works better for him than pre-existing alternatives without getting bogged down too much that every side of his language needs to be perfect right off the bat.
In other words, he is balancing the time to perfect his language against the time he wants to spend making games rather than spending 100% of his time working on the tooling and not getting to write any games.
And while we're on the subject of people making their own languages to make games, see also Andy Gavin.
He made Game Oriented Object Lisp (GOOL) for himself and the other people at Naughty Dog to use when making Crash Bandicoot for the original PlayStation.
He later made another language, Game Oriented Assembly Lisp (GOAL), for the Naughty Dog game Jak and Daxter: The Precursor Legacy for the PS2.
I am just saying it seems like a good strategy. The mention of Braid was just to give some context about who I was talking about, since I think more people have heard of Braid than would know who he is if you didn't mention that he made Braid.
> I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems.
Interesting, yes. More productive, not sure.
It's basically quality versus quantity, or in the words of Stalin: "quantity has a quality all its own".
So far, scaling horizontally (more people working together) seems to fare far better than scaling vertically (one or a few people being way more productive alone or in a small group).
Awesome, I love to see new programming languages in action. This is great. Some thoughts.
First, ignore negativity and focus on getting constructive feedback. One of my tiny regrets is that I abandoned one of my projects due partially to negative energy. Many years ago, one of my projects was shared here ( https://news.ycombinator.com/item?id=226480 ), and the feedback was kind of a buzzkill (especially since I wasn't the one sharing it).
Second, think about the growth you want. While I could ignore the buzzkill and keep the faith, I used my language to put a real product out into the world. The crazy thing is that I got it working, and working very well, but when it came to hiring, it was a cluster fuck. I should have spent a bunch more time on documentation and examples, but I had other concerns that were higher priority. I ultimately had to abandon the whole thing, and I just rewrote everything in C# and used Mono. It was painful, but the company was able to grow faster since the tools were somewhat standard and there was a plethora of examples for the new hires.
When I look back, I was onto something. If I had kept the faith and pushed through, then I would have created something very similar to HHVM which Facebook uses. My strategy back then was to create a less awful language, improve it, then port the platform bits to a better ecosystem and preserve the "business logic".
My core advice with the programming language side of the house is to find a partner for you to lead/follow with shared values. Make it open source as soon as possible, don't wait.
I thought it was a new name for Zig, seeing your name and the description in the link. ;) It actually looks almost too good to be true, especially the claim that it can handle any C++ to V conversion. I was pushing people to check out languages like ZL exactly to rid us of C++ without throwing away legacy code. If it can do that and compile fast, I can't wait to read the full write-up later on.
Btw, I encourage you to keep at Zig for diversity in the systems-language space. Plus, macros. I tell people to avoid them by default for more maintainable code. However, there are times when it's better to have them than not. I was happy to see D, Rust, and Julia do macros. Zig and V should have them, too, for max productivity.
To amedvednikov:
1. The name. Although Kestrel had a V language, that was a long time ago. You're not stepping on anything. I just encourage you to pick a name people can spell and pronounce easily that isn't already taken. That will make both search results and adoption a little better.
2. Macros. Like I said above. I saw you mention Go, which intentionally tries to keep a standardized language for maintainability and easy compilation. I get that. You could add a warning to the main page that macros are available but discouraged in most situations for those reasons. "Use them only when the cost is worth it." You can just do two passes: one for macros, one for regular code. Your incremental compilation should knock out most of what little slowdown there is.
I'm against macros. I've done a lot of research on this topic.
One of the main goals is simplicity and maintainability. I want people to be able to jump into any code base (including stdlib and compiler) and understand what's going on. Macros don't help with that.
I went through like 10 names and ALL of them were taken. There are a lot of programming languages out there :)
I don't know of any language with macros that have proper IDEs that do more than syntax highlighting. One of the reasons languages like Dart, Java or C# have such amazing IDE capabilities w.r.t. refactoring is that they don't generate half of their code during runtime...
I agree. Macros only help the programmer writing the code; they significantly decrease code readability, IMO. Code readability is more about how easy it is to understand what is going on than about how much stuff there is to read.
Max productivity for the library implementor, not for the integrator or maintainer. The number of hours I've spent inlining gross macros so I could debug them has poisoned me against all but the simplest applications of them.
I said the same thing in my comment. You get max productivity with selective use. Nine times out of ten you don't need them. Overuse of them led to LISP code being hard to read, like you said. We don't need to repeat the mistakes of history. So, I said add them with a warning to minimize their use for maintainability's sake if the language is trying to be like Go.
On the other hand, they should be most of the code if taking a DSL- or MOP-like approach. The people using those will be very familiar with the higher-level language. There's a solid, lower-level language underneath for when the VHLL's don't work out. The macros help there.
> Overuse of them led to LISP code being hard to read like you said
I don't think that's the case.
That poster your comment goes to claimed that there are types of macros which make debugging harder - not code reading.
Debugging code which makes use of macros is more complicated than code without. There is no doubt about it. One part of it is that debugging happens in a different place: code transformations with side effects run in the compiler, or, with interpreters, at runtime.
First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.
Second, unsubstantiated proclamations like "Overuse of them led to LISP code being hard to read" reinforce the previous point. Could you provide a clear reference where Lisp macros are considered "mistakes of history"? Clear references where overuse of Lisp macros turned out to be a problem?
I'm curious if you've ever used a Lisp development environment with facilities such as interactive macroexpanders or if you're just assuming things based on your (incomplete, suspect) understanding of the domain.
>First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.
Actually both versions are valid.
You seem not to know the historical context of Lisp/LISP. While Common Lisp is spelled "Lisp" and the more modern usage is "Lisp", historically "LISP" was prevalent (and tons of Lisp dialects prefer the capitalized version, e.g. "fooLISP" or "barLISP").
Second, you are concerned with superficial details people don't and shouldn't care about. We're programmers; we care about the code and what you can do with it, not about whether some language's name is "properly" spelled in caps or mixed case.
Third, you are rude, which is worse than both of the above.
I learned about it from old books, often on AI, that I could scrounge up when I didn't have the Internet or a computer. LISP as in LISt Processor: an acronym. Due to broken memory, I sometimes forget which term to use for stuff that's faded away. I end up more or less randomly using LISP or Lisp, unless it's Common Lisp, where I usually see "Lisp" in write-ups.
There are lIsPs like Clojure that argue (> data functions macros). Although I mostly disagree (for example I love Racket macros), I understand and appreciate the sentiment. I have heard it from other people who have worked with a variety of LiSp code bases over the years. TL;DR: Any form of non-trivial DSL needs supporting materials like documentation and a simple, clear design.
Although it is nice to be able to expand macros, fully-expanded macro-generating macros are clear in approximately the same way as assembly language. It is impressive if you can navigate that, but even more impressive if can manage not to need to do so.
Clojure argues that (> data functions macros), but it still has macros and accepts that they are not only useful, but sometimes necessary. Clojure's core.async would have had to be built into the compiler, if it wasn't for macros. Just because it prefers data to functions and functions to macros, doesn't mean that it doesn't recognise the importance or usefulness of all three.
> lIsPs like Clojure that argue (> data functions macros).
That's not really Lisp related. It's more like how the community likes to see code being written.
For example, I would regularly make use of macros to provide more declarative syntax for various programming concepts.
There are Lisp dialects which are light on macros and some which use macros a lot more. For example, the base language of Common Lisp already makes use of many macros by providing them to the user as part of the language (from DEFUN, DEFCLASS, ... INCF, up to LOOP).
Seems like you are going for a two-staged language with homogeneous metaprogramming. As opposed to heterogeneous metaprogramming (macros). I like it, I must say.
> 100% overlapping with what I'm trying to do with Zig.
I don't see any mention of meta-programming for V, which seems to be a big emphasis for Zig? This seems like a massive feature that at least I'd care about. I haven't personally had the opportunity to try Zig, but I'm rooting for you, so I hope you keep going with the project.
I'm pretty sure V won't deliver on any of its promises. I wouldn't be surprised if nothing came of the announcement at all. The author seems to have a history of big claims and vanishing shortly after. Also, the implementation details given here in the comments don't make him sound like someone who has a grasp of the internals of a compiler or programming languages.
That's what I referred to. His comment history looks like he is advertising his chat apps all the time. One comment pointed out that eul.im was 4 MB in size but loaded a browser runtime on initial start; also, he never opened the source. I'm not saying that he is a scammer or anything, but that product page makes some very big claims and has little to no proof of V's existence. So I think it is too soon to celebrate the new GOAT of the programming language game.
The claim that the language is guaranteed thread safe without data races is also unsubstantiated. Avoiding data races and, more importantly, deadlock and livelock, are incredibly difficult problems to solve.
V looks interesting, but I'll wait and see before I jump onboard.
Very interesting. What I'm curious about is how this language is compiled; the implication seems to be that it gets translated to C/C++, which overlaps a lot with what we are doing with Nim :)
It seems to be an option to compile to C for platform support, but the page also says:
> V is compiled directly to x86_64 machine code (ARM support is coming later) with no overhead, so the performance is on par with C.
Which is interesting. There is still a lot more information required, but "direct" may also imply that it's doing the actual instruction scheduling itself rather than relying on LLVM. I'm looking forward to hearing more.
I think /lang may be out of date from the home page. Here's what's on the index:
> Is Volt open-source?
> Not at the moment. Due to several reasons, right now the development model is similar to that of Sublime Text. The app is going to be open-sourced in 2021, so you don't have to worry about it being abandoned.
Edit: Just in case people miss the responses, my apologies, this quote may be referring to Volt the app, not the language (V).
That refers to the Volt app, not the V language. According to the website, the V language will be open sourced in 2019, and Volt (the app) will be open sourced in 2021.
It's at least plausible they could open source the language without open sourcing the app at the same time. So they're not necessarily in contradiction.
* There's no null and everything is automatically initialized to empty values. No more null reference crashes.
* Variables are immutable by default and functions are partially pure: function arguments are always immutable, only method's receiver can be changed.
* Thread safety and guaranteed absence of data races. You no longer have to constantly ask yourself: "Is this thread safe?" Everything is! No performance costs either. For example, if you are using a hash map in a concurrent function, a thread safe hash map is used automatically. Otherwise a faster single thread hash map is used.
* Strict automatic code formatting. It goes further than gofmt and even has a set of rules for empty lines to ensure truly one coding style.
Especially eye-catching are the two modes of every data structure: switch to the thread-safe one if there is concurrent access.
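Presumably that means something like the following, made explicit here in Rust (just my reading of the claim, not how V actually implements it):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // single-threaded use: a plain map, no locking overhead
    let mut local = HashMap::new();
    local.insert("a", 1);

    // concurrent use: the same logical structure, now behind Arc<Mutex<...>>
    let shared = Arc::new(Mutex::new(HashMap::new()));
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            shared.lock().unwrap().insert("b", 2);
        })
    };
    handle.join().unwrap();

    println!("{:?} {:?}", local, *shared.lock().unwrap());
}
```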
For "no null", is he saying that everything is just initialized to a default value (int=0, str=“”, etc.)? I'm thrown off by “everything is initialized to empty values” because I don't see how empty and null are different.
Garbage-but-valid values are, in my opinion, much harder errors to catch than a simple crash/null pointer exception, because they can silently corrupt data.
I'm not a fan of null (option types seem better -- which V does say it has), but defaulting to an "empty" value isn't the answer IMHO as it makes it much harder to debug by obscuring that there's a problem at all. You may not realise that the 0 is actually an "empty" and not a valid 0 until you realise all of your calculations are wrong, months later.
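A small Rust contrast between "empty by default" and an explicit optional (the struct and field names are invented):

```rust
#[derive(Default, Debug)]
struct Config {
    port: u16,    // defaults to 0: "valid", silently wrong, found months later
    host: String, // defaults to "": flows through the program without complaint
}

#[derive(Debug)]
struct ExplicitConfig {
    port: Option<u16>,    // None forces the caller to handle the missing value
    host: Option<String>,
}

fn main() {
    let c = Config::default();
    println!("{:?}", c); // Config { port: 0, host: "" } -- nothing crashes, data may be wrong

    let e = ExplicitConfig { port: None, host: None };
    println!("{:?}", e.port.unwrap_or(8080)); // the fallback is at least explicit
}
```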
“No variable shadowing” might sound great when you’re thinking about that one time someone confused you for three seconds with the addition of a new inner variable with the same name as an outer variable. But once you realize that it equally means forbidding the addition of a new outer variable with the same name as an inner variable (even though those inner variables are supposed to be implementation details)—and, as a corollary, you can never add new builtins to the language without breaking backwards compatibility—you’ll realize that most languages allow shadowing for a reason.
I think you're confusing "keyword" with "reserved word". A keyword has a special meaning in certain circumstances, while a reserved word cannot be used as an identifier. In some languages, those are the same thing.
For example, "goto" is a reserved word in Java, while not a keyword. Inversely, in Fortran, keywords are not reserved names, so `if if then then else else` is valid. C# has "keywords" and "contextual keywords", both categories being keywords, but only the former being reserved names.
Regardless, a builtin might be referring to a function that's included as part of the base language - like make() in Golang or zip() in Python.
Eh, right. But correcting for that I still hold the same view that adding new builtins should be considered breaking backwards compatibility.
EDIT: Since someone has gone through the trouble of downvoting this viewpoint, I would like an explanation as to why I'm wrong here. I cannot imagine a scenario in which doing so would not likely break existing code.
If you allow shadowing then surely the local definition of the name will take precedence over the new name introduced in the stdlib (or wherever), and thus the program will keep behaving how it did before the new builtin was introduced.
I think I must be misreading you, because it sounds like you're arguing for the idea that adding new names to an API should be considered a breaking change? I'll accept that maybe reflection will catch those changes, but with that exception why would any code even notice?
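For example, here's how it plays out in Rust, where shadowing is allowed (the idea of a future prelude function named `total` is purely hypothetical):

```rust
fn main() {
    // if a later stdlib release introduced a prelude function `total`,
    // this local binding would simply shadow it, so the code below
    // would keep meaning exactly what it meant before
    let total = |xs: &[i32]| -> i32 { xs.iter().sum() };
    println!("{}", total(&[1, 2, 3]));

    // ordinary scoped shadowing: the inner binding wins, the outer is untouched
    let x = 5;
    {
        let x = x * 2;
        println!("inner x = {}", x); // 10
    }
    println!("outer x = {}", x); // 5
}
```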
Those are exactly the ideas I had for what would make a perfect language! (Assuming they're implemented properly of course).
Simple features. Immutable and pure by default (but not dogmatically so). Fast compile. Hot reload. Automatic C interop. Fast-ish. Built in niceties like hashmaps, strings, and vectors (niceties compared to bare C). Receivers so you don't have to do the song and dance you do in C to tie structs and functions. No header files!
Go came close, but no cigar. Rust added the whole kitchen sink and loads of accidental complexity. Anxious to see how this fares...
It's interesting that the roadmap for Volt has been saying v1.0 is just around the corner for the past half year. The other roadmap items also don't change much.
It would be great if the roadmap contained realistic items. Once a user is burned by an unmet expectation he won't believe anything else on the website.
please don't take this the wrong way but I'm almost more excited to read blog posts about your process and what you've learned than I am for the eventual product, or language used to create the product (though I am excited for both of those!). reading experiences people had trying crazy new stuff is more interesting than results of trying crazy new stuff imo
When you figure out how to do a good job with estimations that can be another post, because I still haven't figured that out. It's way easier to reason about programming language semantics than to guess how long a reference implementation will take.
Yesterday, I was in my Applications folder and deleted an old version with an "ahh too bad this never lived".
Now, a new story. Thanks, can't wait to read more about it!!
I think doing away with global variables is not a very good idea. While using globals is usually a bad idea, there are many instances where globals are appropriate (at least for languages supporting mutability). People who say they never use globals usually do use globals and are just trying to convince themselves they are not because they heard they were bad from somebody.
The entire Spring framework is IMO an elaborate construction built so that engineers could use global variables without their managers finding out. There is little to no difference between carefully using global variables and Spring dependency injection except syntax.
The best solution I have ever seen to global variables is parameterize in Racket (https://docs.racket-lang.org/guide/parameterize.html). I don't think Racket was the first language to come up with this, but it was the first one I was aware of. The basic idea is that you define a global with a default value; you can then call parameterize to change the value for the duration of a function you pass in. It is made thread-safe by using thread-local storage, and the parameter is reset back to its previous value when the function returns.
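For readers who don't know Racket, here is a rough Rust analogue of the idea (a sketch only, not Racket's actual mechanism; the parameter name and the `with_verbosity` helper are made up): a thread-local "parameter" with a default value, plus a helper that overrides it for the duration of a closure and then restores the previous value.

    use std::cell::RefCell;

    thread_local! {
        // The "parameter": thread-local storage with a default value of 1.
        static VERBOSITY: RefCell<u32> = RefCell::new(1);
    }

    // Override the parameter for the duration of `body`, then restore the
    // previous value. Thread-safe in the same sense as Racket's parameters,
    // because the storage is per-thread. (Unlike parameterize, this simple
    // version does not restore the value if `body` panics.)
    fn with_verbosity<R>(new_value: u32, body: impl FnOnce() -> R) -> R {
        let old = VERBOSITY.with(|v| v.replace(new_value));
        let result = body();
        VERBOSITY.with(|v| *v.borrow_mut() = old);
        result
    }

    fn current_verbosity() -> u32 {
        VERBOSITY.with(|v| *v.borrow())
    }

    fn main() {
        assert_eq!(current_verbosity(), 1);
        with_verbosity(3, || {
            assert_eq!(current_verbosity(), 3); // overridden inside the scope
        });
        assert_eq!(current_verbosity(), 1); // back to the default afterwards
    }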
On the other end of the spectrum, I think Rust also has a very good take on globals. It will let you use global variables, but a mutable global either has to go behind some form of locking (or another thread-safe interior-mutability type), or be declared as `static mut` / built on an UnsafeCell, in which case you have to mark your code as unsafe every time you read or change it.
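A quick sketch of both of those paths (my example, not the parent's code): a lock-protected static that needs no unsafe at the use site, next to a `static mut` where every access has to sit inside an unsafe block.

    use std::sync::Mutex;

    // Safe mutable global: the Mutex provides synchronized interior
    // mutability, so no `unsafe` is needed at the use site.
    static COUNTER: Mutex<u64> = Mutex::new(0);

    // Unsynchronized mutable global: every read or write has to be wrapped
    // in `unsafe`, which is the compiler making you own the data-race risk.
    static mut RAW_COUNTER: u64 = 0;

    fn main() {
        *COUNTER.lock().unwrap() += 1;
        println!("locked counter = {}", COUNTER.lock().unwrap());

        unsafe {
            RAW_COUNTER += 1;
            let snapshot = RAW_COUNTER; // copy out before printing
            println!("raw counter = {snapshot}");
        }
    }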
I find myself sometimes wishing that Rust genuinely didn’t have globals. There are some things where it’d cause pain, and some places where I’m not sure what you’d replace it with (lazy_static, for example), but I find that Rust actually makes it too easy to do globals and thus have unexpected side-effects (e.g. command line arguments, environment variables, working directory—I would genuinely prefer these things to be passed to main for me to use at my discretion), although in most places the culture is against using them, which saves it from being too much of a problem. Yet still, strict functional purity can open up some delightful optimisations and avoid various bugs, just like how putting error handling into the type signatures helps clarify things and avoid all kinds of bugs. I’d like to see what Rust would be like with even fewer, or no, globals.
> The best solution I have ever seen to global variables is definitely parameterize with Racket (https://docs.racket-lang.org/guide/parameterize.html). I don't think Racket was the first language to come up with this, but it was the first one I am aware of. The basic idea is that you define some global with a default value. However, you can call parameterize to change the value for the duration of some input function.
Yes, it's called dynamic scoping, and for a long time it wasn't believed that the other option (what we call lexical scoping today) could even be implemented efficiently.
> I don't think Racket was the first language to come up with this.
Racket's parameters are just dynamically scoped variables. Most Lisps have them. The older Lisps actually predate lexically scoped variables and had dynamically scoped variables exclusively! Emacs Lisp is not even that old and only had dynamic scope until relatively recently.
> Racket's parameters are just dynamically scoped variables.
More or less. Racket's parameters also work with multiple threads and cooperate with continuations.
In Scheme it is common to see `fluid-let` to handle classic dynamically scoped variables.
I think most Common Lisp implementations that provide threads make dynamically scoped variables (called "special variables" in Common Lisp) thread-local.
Going a bit more abstract, it's a good idea to do this with mental models, too--the programming languages of the mind. Many of us are heavily burdened in our problem-solving long before we fire up our text editor, because our mental model for exactly what it is we are doing is extremely sloppy and rich with unnecessary dependencies.
A lot of people continually tinker with mental models that are into the Gigabyte-equivalent range in terms of all they intend and promise to do. One example here on HN might be the "startup" model. What "it" is seems pretty fluid at this point and in various discussions it gets mashed and molded to fit this concern or that one. Better models will come along that will solve problems nobody can yet put their finger on. (I'm speaking in the abstract here, but I've experienced and worked heavily on this kind of model-change and it can be very valuable.)
What typically happens is, someone comes along and isolates an issue which promises high leverage or high controversy or both, brings a set of problems into really sharp relief and remains in the needed context without the burden of supporting and interlinking with every other context out there, and voila--a powerful solution emerges in a very efficient way. Pretty soon everybody who needed a [startup] mindset now needs a [successor-lens] mindset. And not just in name--it's clear that this can really help. It's good stuff.
It's really just more of what we call "technology" and is observable in the same sorts of curves, but again, there's a model that's overburdened--the technology of the mind still overlaps with and rubs against what we consider "true" technology of the "useful arts" sort. As a civilization we suffer, mostly unknowingly, under the burden of yesterday's thinking about how things fit or don't fit into which categories.
There are many computer environments beyond the desktop and cloud servers. Arguably most computer environments.
But to reduce it: imagine an oil & gas pipeline controller that was foolishly built on something bigger than QNX and C. That thing will be pumping oil and gas for 30 years, with online upgrades, until some young turk blows out the library size. Then you get a blown line, an oil spill, and New Jersey explodes.
Now that I have experience with translating C/C++, it would be really cool to translate existing Android Java apps to V. This would probably take more than a year, though...
Sounds too good to be true. If this is released it'll be serious competition for Rust, Nim, Zig, etc. Let's hope for the best. There are just so many amazing features. There's even a graphics library in it.
How exactly is this a serious competitor to Rust? The guarantees volt is (claiming) to make are nothing compared to Rust. I'm not saying this is bad (Rust is pretty rigorous and hard to code in), but I don't really think they are comparable.
It does seem similar to Nim and Zig; I just think Rust is in a different category from almost all other languages altogether.
To the problem you were originally trying to solve, why not just use Rust? Go and C are really about as related as Java and C. Rust would have met all your requirements, and has a lot of features you added to V to begin with.
That's strange, I find Rust to be much less complex than working with C++ or C. It keeps track of all the tough bits for me, and it has all of the nice expressive stuff from Haskell. With Go I kept running into cases where the language simply had no feature to save me from multiplicative complexity in my code base.
I haven't had any problems with build times thus far, how big is your project?
It's not big. But I'm developing V so that everyone can create large applications with very fast compilation times. I'm getting a 120x improvement for DOOM 3, and I think it can be up to 400 times faster for more complex C++ projects that use more templates, Boost, etc.
Of course I didn't need a new language to make my project compile faster :) I just wanted a simpler C, and I had some experience with writing languages (I wrote 2 languages at school/uni).
Now I'm actually more excited about V than the original product I created it for :)
I had been looking for this exact project, but I couldn't remember its name for the longest time, though I remember that home page exactly. Doing a bit of searching, it turns out volt.ws appears to be a rebranding of the previously posted [1] Eul (eul.im), posted by alex-e (whom I assume is its author). Either way, volt.ws and the associated V language sound quite interesting; I look forward to hearing more about this in the future.
As an (anonymous) programming language designer, a few bits of feedback.
First, nice concept, but without open code, it might as well not exist, and without open specification, it might as well be yours alone, like one of Tolkien's languages. Closed languages wither and die, and yours seems well onto that path.
Second, what makes V compelling to you appears to be completely uninteresting to me in terms of language design. It might as well compile from V to Go; I can't see why not!
Whenever a language designer appeals to simplicity, they are usually appealing to whatever makes it possible for them to be productive, and they are usually missing that the productivity is personal because the designer is the one who builds the language. The GL demo seems to be a great example of this sort of situation.
I hope that you publish your work so that we may properly critique it.
Edit: Here is another language designer who is not me saying "closed languages die" (https://blog.golang.org/open-source). I think that, until we actually have a compiler for V (or whatever it is hopefully renamed to before release) in our hands, we ought to be extremely careful about trusting that any of this exists. It is all too common in PLT/PLD for somebody to come in with bold claims, outrageous mockups, and zero toolchain. I addressed what I saw, which is yet another compiles-to-Go hobby language. To become more than that requires a committed community and a common repository of open code, and the author appears to have only the former.
> Closed languages wither and die, and yours seems well onto that path.
This is unnecessarily harsh. The author has already said it will be open sourced later, and I can understand the reasons not to open source it now. Managing an open source project is no small amount of work.
Second, notwithstanding V's slim feature set, it's already more successful than 99% of language design attempts out there in that it ships. It certainly succeeds in letting the author build his other projects faster and more easily. It fulfills the author's own needs. I'm sure Perl and Python started that way.
This is his personal project. He really does not owe anyone anything, deadline or no deadline. Open source "users" have been getting really entitled these days.
> I hope that you publish your work so that we may properly critique it.
I don't know about this author, but for me, this would emphatically not be a motivating reason to publish my work. I might publish work so that someone could get some use out of it, or to show off my brilliance. But if all you're going to do is critique it (no doubt with all the familiarity born of five minutes of looking at the tutorial), then I'd just as soon you never see it.
I don't think the person you're quoting would advocate that languages must start out as open source. Go sure didn't. It was developed closed source within Google for two years before it was even announced.
I guess it depends on how the critique is delivered. I would unironically love it if an expert in [repo language] came along and critiqued my open source code.
If they were an arrogant shit-head then I'd probably just block them, regardless of the technical merit of what they wrote.
Casual criticism on Hacker News is still more valuable for drawing attention to your project than in-depth comment chains from renowned experts on [repository manager of choice], especially since those comment chains are undiscoverable unless you're already interested in the project (or the chain gets linked on Hacker News).
Plus, this may just be me speaking as a dunce who isn't designing a tool language for a project, but reading a critique of a language/framework I hadn't thought about makes me want to try out the language and see how that shortcoming affects the way I work. It's the reason I tried out Go, Elm, and Vue.js.
It's not necessary for it to be a general purpose programming language, though. I think it is kind of neat to have a very specific, personal language that fits your mental model. On a larger team, probably not what you want, but there are other languages that are optimized for that.
Basically you're right if the creator wants a widely-adopted general-purpose language. But there are other valid approaches, I think.
> It is all too common in PLT/PLD for somebody to come in with bold claims, outrageous mockups, and zero toolchain
This. It's crazy to me how quickly developers are ready to get behind something without even being able to use it. Jai is similar in this regard.
It's easy to make wild claims like "super fast compilation" or "can be translated from C++" when you don't have hundreds of users, all finding edge cases and wanting different things. Especially easy when you haven't released anything so everybody is projecting their favourite features onto the language.
Looks a bit like Odin (https://odin.handmade.network/), a language explicitly designed to be small and simple that I've been enjoying learning and toying around with.
For me personally the killer feature is the C/C++ to V translator.
Having all my libraries in the same language makes a lot of things, like debugging and testing, easier. It also simplifies the mental model I have of the program.
Also, C interop is my main struggle with Go. For me it is such a pain that sometimes I wrap a C library in a stdout/stdin server, spawn it, and use Cap'n Proto for communication. When you use cgo you lose the easy cross-compiling; I would love something similar in Go and/or Rust.
I wonder if LLVM could be used to implement a sort of universal transpiler. Even if it didn't produce idiomatic code in the target language, it would still make a lot of things easier.
> Originally Volt app was written in Go, but after a couple of weeks of development I decided to re-write it in C for two reasons: easier integration with existing C graphics and UI libraries and much smaller binaries. The app size reduced from ~5 MB to ~100 KB.
I'm always curious when people complain about binary sizes. Was there some reason the binary needed to be small, or is it just based on some sense of 'largeness' and 'smallness'? It seems to me like rewriting in a 'less productive language' to save a few MB of binary size that nobody cared about anyway is a pretty big waste of time.
I don't mean to come across as super critical; there are cases where binary size could be really important, say if you're on an embedded platform with minimal memory. I just don't understand why it was something worth optimising here, especially to the point of a rewrite.
So Volt seems interesting, but how do I download it?
All of the download links just give today's date, and issues on the GitHub page the site links to mention it being flagged as a virus or not being supported.
I'm not sure I get what is going on in the live reloading code.. if you update the draw function to change colors, then why does the block only change colors when it hits the edge of a screen?
Volt looks amazing. I've been looking for a native IM app for a while, and I couldn't find one. I hate all those Electron or browser apps taking hundreds of MB of RAM just to display messages.
I hope it succeeds and is released soon! Thank you very much for your work.
> You can also simply translate your entire C/C++ codebase to V and make your build times up to 400 times faster. An automatic translator supports the latest standards of these languages.
Is this saying you can actually get compile time improvements compared to the original codebase? If so, I can imagine a couple of ways this could work:
1) skipping the optimization the C/C++ compilers would do, with V's direct-to-machine-code generator, and/or
2) doing some heavy lifting in the translator, so the equivalent V has more redundancy and less implicit information.
As someone who has never designed a language or implemented a compiler, this seems like a very daunting task. I'm just wondering how much effort it requires and whether someone like me, who has experience developing software in high-level languages but not compilers, could implement it. It seems like a very interesting project and I would love to try something like this just for the sake of learning. I would really appreciate it if someone could give me some pointers on where to begin.
It is daunting, but getting started is actually straightforward. For instance, you could start by writing a small domain-specific language and translating it to C++, either with a parser generator like ANTLR or by building raw abstract syntax trees by hand.
It gets harder as you add more features, both to keep a consistent experience and to build all the other tooling a language needs (like debugging).
I find these problems fun, and they help make you a better programmer, but the cost of being successful is immense.
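As a concrete illustration of the "raw abstract syntax trees" route (my sketch, not the commenter's code, and written in Rust rather than with a parser generator): a toy expression AST plus a function that emits equivalent C++ source text.

    // A toy expression AST and a C++ emitter for it: roughly the
    // "hello world" of a language that compiles to another language.
    enum Expr {
        Num(i64),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    fn to_cpp(e: &Expr) -> String {
        match e {
            Expr::Num(n) => n.to_string(),
            Expr::Add(a, b) => format!("({} + {})", to_cpp(a), to_cpp(b)),
            Expr::Mul(a, b) => format!("({} * {})", to_cpp(a), to_cpp(b)),
        }
    }

    fn main() {
        // (1 + 2) * 3
        let ast = Expr::Mul(
            Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
            Box::new(Expr::Num(3)),
        );
        // Wrap the translated expression in a minimal C++ program.
        println!(
            "#include <cstdio>\nint main() {{ std::printf(\"%lld\\n\", (long long){}); return 0; }}",
            to_cpp(&ast)
        );
    }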
I'm interested to know how the language handles memory; automatic memory reclamation without a GC and that is simpler than Rust seems like a very attractive proposition.
Yes, snake_case is the superior naming system imo. The only problem is that each underscore requires two keystrokes. Maybe a keyboard macro or something would help.
> You can also simply translate your entire C/C++ codebase to V and make your build times up to 400 times faster. An automatic translator supports the latest standards of these languages.
How do you plan to translate the latest C++ standard, with all that template stuff?
I would be cautious about making a language with only reference counting since you can easily end up with stack overflows when implementing something as innocent as a linked list unless you keep some sort of list of objects to free in the runtime.
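For anyone who hasn't hit that failure mode, here's a Rust sketch of it (Rust is only the illustration; the same applies to any naive refcounting scheme): dropping the head of a long Rc-based list triggers one nested destructor call per node, so the fix is to unlink the nodes iteratively, or, as the comment says, to defer the frees to a runtime-side list.

    use std::rc::Rc;

    // A reference-counted singly linked list.
    struct Node {
        _value: u64,
        next: Option<Rc<Node>>,
    }

    // Without this impl, dropping the head recurses once per node and a long
    // list overflows the stack. This iterative Drop unlinks nodes in a loop
    // so the destructor chain stays shallow; remove it to see the crash.
    impl Drop for Node {
        fn drop(&mut self) {
            let mut next = self.next.take();
            while let Some(rc) = next {
                match Rc::try_unwrap(rc) {
                    Ok(mut node) => next = node.next.take(),
                    Err(_) => break, // node is still shared elsewhere; stop here
                }
            }
        }
    }

    fn main() {
        let mut head: Option<Rc<Node>> = None;
        for i in 0..1_000_000 {
            head = Some(Rc::new(Node { _value: i, next: head.take() }));
        }
        drop(head); // frees a million nodes without deep recursion
    }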
The programming language C was invented to write Unix. So sometimes creating a language to write software works. The defence industry (in the US) is (apparently) notorious for doing this. Any other examples?
Would it be possible to compile V to Java or JVM bytecode, but without using a giant array in the middle? I'm mainly wondering if one could use GTK+ on Linux, and Android APIs without JNI on Android.
Off topic, but will there be a dark mode for Volt? I can't abide Slack's lack of an official one and the constant overwriting of the file that maintains the hack for the unofficial one.
Very very interesting. But I hope it's renamed to something other than a single-letter name, so we can use search engines to find info about the language later.
I emphatically disagree about the waste of energy.
You do not need to always do novel work, and not every piece of code has to be in support of some specific engineering goal with external justification. I don't consider gardening a waste of energy, even if I can get groceries at the store more efficiently. I don't consider painting a waste of time, even if my paintings are bad compared to others.
> But it's definitely a wasted effort if the goal is to write a light weight chat client...
That’s vacuously true, because there are always more goals besides just writing a light-weight chat client.
I find most of what I do is balance different goals, objectives, and concerns against each other. Even if you only have external business objectives, you at least consider both the short-term and long-term.
There isn't enough detail on this page to really know what's unique about this language. However, I'd say in general that I don't think this area is "solved" at all, and there are a lot of open problems (or at least, problems in integrating different approaches together).
Consider that there are multiple languages that have evolved in this space recently, all of which take slightly different approaches and make different tradeoffs. I personally know of at least: Nim, Rust, Terra, Zig. There are probably others.
(Edit: This isn't quite right: these don't all compile to C. But they're all low-level and make tradeoffs which are similar to C.)
You can say, "well all of these already exist so why not join up with an existing effort instead of creating a new one?" Except that languages make tradeoffs that fundamentally impact other objectives. For example, Rust has already made tradeoffs which make it seem unlikely that it will ever get Terra-style metaprogramming (at least to the full extent to which Terra supports it). And that might be fine for many of the things you'd want to do with Rust. But I at least have things I'd like to do that benefit strongly from Terra-style metaprogramming, and I might like to have something that combines the advantages of both. Right now that thing doesn't exist, and it's not obvious if you could ever get there in a reasonable way from either Rust or Terra by taking one or the other and trying to build to the other side. Thus, I don't find the approach of starting over from scratch unreasonable at all.
Rust's procedural macros let you write Rust code that generates Rust code. In combination with tools like quote, it's just a much clunkier version of the same thing that Terra offers!
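For anyone who hasn't used them, this is roughly what that looks like (a hedged sketch: it assumes a separate crate with `proc-macro = true` and `syn`, `quote`, and `proc-macro2` as dependencies; `make_adders` is a made-up toy macro, not a real library):

    // Lives in a dedicated proc-macro crate (Cargo.toml: [lib] proc-macro = true;
    // dependencies: syn, quote, proc-macro2).
    use proc_macro::TokenStream;
    use quote::quote;
    use syn::{parse_macro_input, LitInt};

    // A toy function-like macro: `make_adders!(3)` expands to fn add1, add2, add3.
    #[proc_macro]
    pub fn make_adders(input: TokenStream) -> TokenStream {
        let count: u32 = parse_macro_input!(input as LitInt)
            .base10_parse()
            .expect("expected a small integer literal");
        let fns = (1..=count).map(|n| {
            let name = syn::Ident::new(&format!("add{n}"), proc_macro2::Span::call_site());
            quote! { pub fn #name(x: u32) -> u32 { x + #n } }
        });
        quote!( #(#fns)* ).into()
    }

The ceremony involved (a separate crate, re-parsing the token stream with syn) is exactly the clunkiness being described, compared to Terra's inline quotations.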
In short, the biggest difference is that Terra makes it very easy to interact with the compiler at every level: I can insert new keywords, and incrementally type check pieces of code to see (a) if they compile and (b) what types they produce. This makes it surprisingly easy to build really powerful tools like Regent [1]. While if you read through the rest of the thread, it does sound like it's possible to do this in Rust, it will be very painful, at least today. In particular, it would require either (a) calling out to rustc as an external program, which would probably be very slow, (b) rebuilding the parts of rustc in the DSL compiler, which realistically isn't going to happen, (c) directly hacking on rustc, which is a lot of work and would result in a hard fork to the language, or (d) radically simplifying the DSL compared to what Rust provides, which I think would be counterproductive. So, without a lot more work I think this is a no-go for the moment. But I'm eager to see if the Rust team can come up with something better, since I like Rust in many other ways and it would be great to see Rust grow a capability like this.
“Waste of energy” is generous. It seems like a genuinely bad idea unless the problem space is poorly addressed (messaging is certainly very well-trodden territory) or the app is just a hobby.
Using a new language makes a single project into two projects, at least one of which is huge.
Among the many benefits of an existing language with a mature ecosystem:
- fewer bugs
- more than one person can contribute without learning a new language
Having your own language can increase productivity. The compiler took 2 weeks according to the OP, and the fast compile times definitely help. This is also a project only the OP is working on, so the number of contributors doesn't matter, and even then it's stupidly simple to learn any C-like language if you know one already. If this works for the OP, then it's definitely not a waste of time.
It is an extraordinary feat that the OP took only a couple of weeks to create a working programming language. I always thought it takes years to create a new language. It will be interesting to hear from the OP how he could create a new language in a couple of weeks.
A programming language is a translation scheme. If you know your scheme up front, it's trivial to translate. C-like languages are easier to translate to binaries because, well, C is glorified assembly. The analysis parts may take longer, but when you start writing a language you don't need to do much analysis, especially when writing a C-like language. Parsing can be automated, and given a clean grammar like Go's, it's trivial to parse.
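To make "trivial to parse" concrete, here's a minimal hand-rolled sketch (mine, not the commenter's, and in Rust): a tokenizer and a recursive-descent parser for a toy `name := a + b * 12` statement, which is the basic shape of the scheme being described.

    // Tokenize and parse a toy statement of the form `name := expr`, where
    // expr is built from integers, identifiers, `+` and `*` (with precedence).
    #[derive(Debug, Clone, PartialEq)]
    enum Tok {
        Ident(String),
        Num(i64),
        Assign,
        Plus,
        Star,
    }

    fn lex(src: &str) -> Vec<Tok> {
        let mut toks = Vec::new();
        let mut chars = src.chars().peekable();
        while let Some(&c) = chars.peek() {
            match c {
                ' ' => { chars.next(); }
                '+' => { chars.next(); toks.push(Tok::Plus); }
                '*' => { chars.next(); toks.push(Tok::Star); }
                ':' => { chars.next(); chars.next(); toks.push(Tok::Assign); } // assumes `:=`
                '0'..='9' => {
                    let mut n = 0i64;
                    while let Some(d) = chars.peek().and_then(|c| c.to_digit(10)) {
                        n = n * 10 + d as i64;
                        chars.next();
                    }
                    toks.push(Tok::Num(n));
                }
                c if c.is_ascii_alphabetic() || c == '_' => {
                    let mut s = String::new();
                    while chars.peek().map_or(false, |c| c.is_ascii_alphanumeric() || *c == '_') {
                        s.push(chars.next().unwrap());
                    }
                    toks.push(Tok::Ident(s));
                }
                other => panic!("unexpected character {other:?}"),
            }
        }
        toks
    }

    #[derive(Debug)]
    enum Expr {
        Num(i64),
        Var(String),
        Add(Box<Expr>, Box<Expr>),
        Mul(Box<Expr>, Box<Expr>),
    }

    // Classic recursive descent: one function per precedence level.
    fn parse_expr(toks: &[Tok], pos: &mut usize) -> Expr {
        let mut lhs = parse_term(toks, pos);
        while toks.get(*pos) == Some(&Tok::Plus) {
            *pos += 1;
            lhs = Expr::Add(Box::new(lhs), Box::new(parse_term(toks, pos)));
        }
        lhs
    }

    fn parse_term(toks: &[Tok], pos: &mut usize) -> Expr {
        let mut lhs = parse_atom(toks, pos);
        while toks.get(*pos) == Some(&Tok::Star) {
            *pos += 1;
            lhs = Expr::Mul(Box::new(lhs), Box::new(parse_atom(toks, pos)));
        }
        lhs
    }

    fn parse_atom(toks: &[Tok], pos: &mut usize) -> Expr {
        let tok = toks[*pos].clone();
        *pos += 1;
        match tok {
            Tok::Num(n) => Expr::Num(n),
            Tok::Ident(s) => Expr::Var(s),
            other => panic!("unexpected token {other:?}"),
        }
    }

    fn main() {
        let toks = lex("total := base + rate * 12");
        assert_eq!(toks[1], Tok::Assign);
        let mut pos = 2; // skip `name :=`
        println!("{:?}", parse_expr(&toks, &mut pos));
    }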
It's not about syntax. I knew that "why another language" would be a very common question, so I'll add an answer to the landing page of V's website :)
Basically none of the existing languages have the features that I need:
1) Compilation speed. V compiles 15 million lines of code per second. Only Go and Delphi are relatively fast.
2) Simplicity & maintainability. Only Go is simple. I made V even simpler and stricter.
3) No runtime/GC. I want the language to be close to metal so that it can be used to develop small and fast native apps, games, drivers, etc. Go's out.
4) Safety. Go has nil (runtime errors), no option types (verbose error checks and unhandled errors), data races, global variables.
5) Easy interop with C. Cgo adds a lot of overhead.
I would argue that this is a niche that is underserved at the moment. There are very few powerful, fast-compiling, as-fast-as-C languages. Right now I'm aware of Nim and Zig in this space, along with Jonathan Blow's unreleased programming language Jai. I have used Nim, very briefly, for a toy project. I can say that it's really interesting and has a lot of great ideas, but it feels a little rickety, like it's a slightly too-thin layer over C. Zig I haven't looked into much yet. Jai isn't out yet.
There are other languages that are slightly further from this niche which might serve. Go is not very powerful, lacks generics, and is garbage collected. Rust... I don't know. I'm turned off by the fact that it's so hard to write a linked list in it (I know that's very superficial, but it is what it is). It also doesn't compile very fast from what I read.
I think this is a great space for new ideas. The successor to C is still quite unsettled, I think, and there's space for motivated people to have an impact.