These languages each have enough warts that lots of people and languages are chipping away at them.
Go and Nim are doing the strong static typing that "feels like scripting" thing. Even Rust is making a bit of a dent, despite it being a systems language with memory semantics.
There's plenty of room for all sorts of projects to try. Don't count anyone out. Even if the language never gains momentum, its ideas or developers might have some other effect. And even Nimrod was once in this state.
This is an excellent point and I think that while most languages will fade away in time, if they even contribute one new idea the project is worthwhile. If we had asked programmers in 1991 what the point of Python was, the responses would have probably been similar to the parent comment, and look at where it is now.
I played around with Vala (a C#-style language that compiled to C, on Linux) back in 2010, and while I never used it for anything serious, it was fun to try out new ideas and see how they could work. There is value in these projects: some of it may carry over into other languages, and some of them may be the dominant language in 10 years.
> If we had asked programmers in 1991 what the point of Python was, the responses would have probably been similar to the parent comment, and look at where it is now.
> This is Python, an extensible interpreted programming language that combines remarkable power with very clear syntax.
> This is version 0.9 (the first beta release), patchlevel 1.
> Python can be used instead of shell, Awk or Perl scripts, to write prototypes of real applications, or as an extension language of large systems, you name it. There are built-in modules that interface to the operating system and to various window systems: X11, the Mac window system (you need STDWIN for these two), and Silicon Graphics' GL library. It runs on most modern versions of UNIX, on the Mac, and I wouldn't be surprised if it ran on MS-DOS unchanged. I developed it mostly on an SGI IRIS workstation (using IRIX 3.1 and 3.2) and on the Mac, but have tested it also on SunOS (4.1) and BSD 4.3 (tahoe).
> Building and installing Python is easy (but do read the Makefile). A UNIX style manual page and extensive documentation (in LaTeX format) are provided. (In the beta release, the documentation is still under development.)
If the authors don't even make a value proposition themselves, I assume the language is a learning exercise and won't take it seriously. It's also not really worth critiquing things I don't like about it. Indeed, I see in the introduction:
> It started as a toy language following the excellent book Crafting Interpreters by Robert Nystrom.
Nothing wrong with that! Sometimes the value is mainly for the author, and that's great. Probably not going to contribute any new ideas though.
Completely agree... but when people like a language, they may still want to write scripts in it even though it may be really uncomfortable.
That said, Nim is actually designed to feel like a scripting language... for example, you don't write a main function at all, just code away (this is an actual hello world program in Nim):
echo "hello world"
So, out of statically typed languages, Nim is probably one of the best choices for scripts.
Interesting. Has ChatGPT/Copilot been trained on Nim to the point where they create code as well as they do with Python?
I ask because I've always treated coding as a necessary evil. When I worked as a programmer, I never gave two shits about the language I was required to use. I'd write a specification, code to the specification, and try to update the spec if I needed to deviate from it.
Because of my damaged hands, now, I do not have enough typing capacity to learn a new language the old trial and error way. chatgpt is my accessibility tool for creating code.
I've learned how to give ChatGPT a spec, skip the coding step, and go straight to validation. I've trained myself and ChatGPT/Copilot to generate Python code as if I had written it, making desk and unit testing easier and faster.
Learning a new language would start with a specification of the problem, as above, then seeing what ChatGPT/Copilot generates. However, it is easier to learn if the LLM actually understands the language. It is sometimes hard to tell the difference between hallucinations in the code vs. hallucinations in the specification.
It ultimately depends upon what you mean by "as good". There is no clear single metric. Once you have more than one metric, it becomes a subjective "whose priorities?" game of weighting them / projecting them into a single metric.
The best answer to a question like yours is: give it a try on some easy problems and see what you think yourself. No one else can really know the kinds of problems / answers / code you most work with (and sometimes the future is pretty murky even to oneself, even in these vague categories).
Disclaimers issued, some things can be said which might help. Since Nim is primarily a highly ergonomic static ahead-of-time compiled language with code running as fast as C often does, errors may be caught more conveniently. Because Python is popular, especially for teaching programming, training coverage will always be better, but Nim has some core features & keywords "kinda similar" to Python which may help on the other side.
Not sure about 4.0, but ChatGPT-3.5 does poorly on basic Nim things that lack direct Python equivalents. To give just one concrete example (out of many): `a, b = b, a` is the common way to swap two variables in Python, while in Nim one writes `swap a, b`.
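The difference is easy to see side by side. Here is a minimal sketch; the snippet itself is Python, with the Nim equivalent shown only as a comment:

```python
# Python: swapping via tuple unpacking -- the idiom an LLM has seen
# countless times in its training data.
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1

# Nim has no tuple-unpacking swap; the idiomatic form uses the
# stdlib `swap` proc instead:
#   var a = 1
#   var b = 2
#   swap a, b   # now a == 2, b == 1
```

An LLM trained mostly on Python may emit the tuple-unpacking form in Nim, which does not compile there, so these small idiom gaps are exactly where hallucinations show up.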
So, if you are willing to do more "compiler-assisted clean-up" or have a/develop a knack at steering the random sampling toward code which compiles, Nim could be about as effective as Python used this way.
In terms of code entry work for your specific hands problem, parentheses can often be left off in Nim code and in general it seems to have much less punctuation / syntactic noise. Of course, keys can be rebound & such & maybe you do that, too. Nim definitely has more powerful abstraction mechanisms like user-defined operators, templates, and syntax macros.
Your suggestion at the end addresses a form of the "speaking the keyboard" problem that has plagued speech-driven programming for years. It is so wonderful that I can dictate the specs and then verbally cut and paste them into the LLM. If I had the energy, I would build an LLM interface with speech-recognition-friendly text areas, so that one could use speech for editing and revising results.[1]
Your response, however, touches on the meta-problem of adding new information to a training set. For example, as people learn Nim and generate more Nim code, the community could expand the LLM's capabilities without going through OpenAI or whoever. I know training requires a lot of GPU time, so training over distributed GPUs would be necessary. For example, I would buy one or two GPUs and pay for the electricity to contribute to community LLM training efforts.
[1] For anyone interested in helping, it would be a simple two-panel design: top for editing, lower for LLM results, and two buttons, first for "copy to clipboard," second for "submit to LLM."