I'm happy to see static compilation in the works! If static compilation takes off, and people are able to build Python packages that are secretly just compiled Julia code, I can see a world where more people opt to use Julia over C or C++. Though writing Rust libraries that work as Python packages is a joy, so Julia would still have competition.
Julia's biggest problem at the moment is growth. Julia has suffered from not having exponential growth, and has either maintained slow linear growth or has fallen in popularity. Search YouTube for tutorials, Twitch for WatchPeopleCode, or GitHub for benchmarks, and Julia is not even in the room where the conversation is happening - there just isn't any mindshare.
And for good reason. There are so many ergonomic challenges when using Julia in a large codebase and in a large team. Julia has no formal interfaces, no option types, and LSP suggestions that are often just wrong. This just makes writing Julia code a drag. And it makes it quite difficult to advocate for the language to developers experienced with languages that offer these features.
Additionally, the core conceit pushed by Julia advocates is that the language is fast. This is true in controlled benchmarks, but in practice it is a real pain for high-velocity teams to write and maintain code that stays fast, because doing so requires a lot of discipline and a strong understanding of memory allocation and of the assumptions Julia can and cannot make. You can write code that is blazingly fast, then make a change somewhere else in your program, and suddenly your code slows to a crawl. We've had test code go from taking 10 minutes to run to over 2 hours because of type instability in a single line of code. Finding this was non-trivial. For reference, if this had gone uncaught, our production runs would have gone from 8 hours to 4 days.
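To make that concrete, here is a minimal illustrative sketch (not our actual code) of the kind of one-character instability that can cause it:

    # Illustrative only: `total = 0` starts the accumulator as an Int, so once
    # a Float64 is added its inferred type becomes Union{Int64, Float64} and
    # the compiler has to carry both possibilities through the loop.
    function slow_sum(xs::Vector{Float64})
        total = 0        # the fix is simply `total = 0.0`
        for x in xs
            total += x
        end
        return total
    end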
The lack of growth really hurts the language. Search for pretty much any topic under the sun and you'll find a Python package and possibly even a Rust crate. In Julia you are usually writing one from scratch. Packages essential to data processing are contributor-strained. If you rely on a somewhat unpopular open source package that doesn't quite work the way you want it to, you might think "I'll just submit a PR", but it can languish for months to a year.
The Julia community needs to look at what other programming languages are offering that Julia developers want and will benefit from. The software world is changing very quickly, and Julia needs to change too to keep up.
> We've had test code go from taking 10 minutes to run to over 2 hours because of type instability in a single line of code.
For those who might not be familiar, tooling can sometimes help a lot here. The ProfileView.jl package (or just calling the @profview macro in VSCode) will show an interactive flamegraph. Type instabilities are highlighted in red and memory allocations are highlighted in yellow. This will help to identify the exact line where the type instability or allocation occurs. Also, to prevent regressions, I really like using the JET.jl static analysis package in my unit tests.
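A sketch of what the JET approach can look like in a test suite (`my_hot_loop` is just a placeholder function):

    using Test, JET

    my_hot_loop(xs) = sum(abs2, xs)  # stand-in for whatever function you care about

    @testset "hot paths are type-stable" begin
        # Fails the test if JET's optimization analysis finds runtime dispatch
        # (i.e. type instability) in the inferred code for this call.
        @test_opt my_hot_loop(rand(1000))
    end

And in the REPL, `@code_warntype my_hot_loop(rand(1000))` is a quick zero-dependency check; unstable inferred types are printed in red.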
If they are so easy to identify, why not just make it a JIT error? Manually inspecting all of this sounds awful. I'd rather my compiler just do it for me.
Dynamic behaviors can be a nice default for all the spots where performance is not critical. So Julia lets you code like Python in places where performance doesn't matter, and then code like C++ or Fortran in places where it does.
I really like Nim, but this identifier-resolution behavior seems to be the only complaint that keeps coming up against Nim, and I really doubt this "feature" is worth the consternation when pitching the language to someone new. I wish the Nim developers would just admit that it is not worth the effort and make it opt-in instead of opt-out.
Personally, I agree, but the Nim lead dev looked into it and declared it "unreasonably hard to implement with unknown effects for template instantiation"
I wish he had detailed the problems more as I think opt-in to insensitivity at import time with some kind of compile-time error for ambiguity is The Better Answer. { For then Nim would not be taking away ident choice from (many) creators in order to support the ident-pickiness of (many) consumers. It's hard to measure/know/forecast, of course, but I suspect the 2nd "(many)" (the number of such very picky consumers) is low in absolute terms in spite of usually high name-consumer/name-producer ratios. A huge fraction of people just go along with whatever the name-producer did, a small fraction complain loudly, and an even smaller fraction would care enough to opt-in to insensitivity if such were available and, say, sometimes caused trouble. There is also a "broader system" aspect of the producer-consumer ratio changing as a prog.lang goes from dark horse/obscure/niche to popular (with a lot more consumers). }
All this said, I would reinforce what others here have said - the property is not actually as onerous in practice as you might imagine. So, it is really worth suppressing knee-jerk reactions to this one feature and giving the language a try, just with `nim c/cpp/js --styleCheck:usages` or whatnot.
> One way data binding that is immutable and has explicit functions for each state transition is a major feature.
> <Tag a="b" x={y+z}>{valid JS expression}</Tag>
> is
> React.createElement(Tag, { a: "b", x: y + z }, [<valid JS expression>])
If you take the main reasons React is criticized and claim they are features, surely you have to refute the criticism more thoroughly than "This seems so simple, yet do not underestimate it." or "That is powerful. Do not underestimate this."
Modern frameworks (Svelte / Vue / Astro) are about using the platform. They are performant, efficient, easier to read, easier to write, and easier to understand.
I don't see any reason I would pick React for a greenfield project anymore.
I get that if YOU don't want to use a modern framework and want to stick with what you know, sure, by all means, pick React. But writing even a semi-complex application in both React and Svelte should make it immediately obvious that React is antiquated, if you give both frameworks a fair shake.
>They are performant, efficient, easier to read, easier to write, and easier to understand.
Proof? Source?
>I don't see any reason I would pick React for a greenfield project anymore.
I don't see why you wouldn't. It's stable, performs well, works in every browser, it's easy to find answers for problems you run into, and almost everyone knows it (or should; it's 2024, you don't have an excuse anymore).
>I get that if YOU don't want to use a modern framework
React is the modern framework. It's nimble, concise. The other frameworks are regressive -- they make mistakes that older frameworks already highlighted as being problematic over time.
You should read this article; it was quite good, even if you do continue to use React afterwards. It's good to understand the alternatives, even if you never use them.
It's been a few months since I read it, but I recall the main thing I took away from it was: React isn't necessarily the best choice; other frameworks provide better performance, development experience, and tooling that should also be considered.
Of course these are just opinions. Everyone should consider all the facts and come to their own conclusions about what they use and don't use.
Companies will continue to use React of course. But I'm not sure if I would use vanilla React for anything I have complete control of.
I’ve done exactly this. Can confirm you are correct. Svelte is also much more performant in the client. But react is great in its own ways and I particularly like the way it tends to point developers towards composition of small components.
I remember thinking I’d like to pick up React for some changes we were planning at work. I spent a weekend with Road to React making a simple web app. I was amazed how overly complicated it seemed, and that I was being told “there’s no DSL” when JSX sure seemed like one. I’m a moron and not some special coder, so maybe that’s why. I was also using Rails 6 at work at the time, but 7 seems to have eliminated any need we thought we were going to have for React, so that’s been nice.
Each person that I’ve come across who likes React is super smart, and I can’t follow what they are trying to say is so great about it. So maybe don’t listen to me anyway.
I think YouTube will "win" this war, unfortunately. Even people that "join" individual channels cannot watch content from those channels if they use AdBlock. At some point these people are either going to buy premium or turn off the ad-blocker.
It boggles my mind that engineers and managers that have kids and are working on the YouTube team think this is okay and this is what the future of the platform should look like.
It's the beginning of the end of an era, and I'm immensely sad to see what the internet has become and where it is heading.
> At some point these people are either going to buy premium or turn off the ad-blocker.
Or they stop going to YouTube. In the past 12 months my use of these platforms has reduced a huge amount. I no longer visit Twitter or Reddit because they blocked 3rd-party apps, forcing me to view adverts; I would rather not be a user on a user-hostile platform.
Deep down, most of us know these are huge time sinks providing very little real value to our lives, so it doesn’t take much friction added by the platform to make people turn away from their bad habits.
YT is too big for the people who hate ads not to invest massive effort in working around them. The ad-blocker endgame will be YT videos being "pre-watched" by a headless browser running in the background and the video data grabbed from there (either recorded or directly). You then watch the ad-less video in a custom frontend. If implemented correctly, there's pretty much nothing YT could hope to do about this.
Sounds like how DVRs worked on cable TV back in the day. Some were even sophisticated enough to use the typical video cutout before commercials to auto skip them on playback.
Now, of course, we have community efforts like SponsorBlock that could easily identify ad locations, or some form of auto-detection based on analyzing the video if they insert the ads at random locations in the video stream itself.
I'm certain this is coming. There's very little on YT I need to watch right now, and having a bunch of videos already downloaded and de-ad-ified would suffice. It would prevent the mindless watching anyway.
I'd even be cool with just blanking my screen and muting the sound when an ad is playing. Given my usual YouTube watching mindset, a brief moment to just breathe would be good for me.
The error handling in Go is SO verbose.
When reading my code (or even reviewing other people's code) in order to understand at a high level what is going on, I feel like I'm squinting through a mesh wire window.
    let lat_long = fetch_lat_long(&params.city).await?;
    let weather = fetch_weather(lat_long).await?;
    let display = WeatherDisplay::new(params.city, weather);
Maybe on first glance the Rust code can seem alien (what is a `?` doing there, what is actually going on with `.await`, etc.), but when you are writing a 100k-line application in Rust, you learn the patterns and want to be able to see the domain logic clearly. And yet, there are no hidden errors or exceptions. When this code fails, you will be able to clearly identify what happened and which line the error occurred on.
Prototyping even small applications in Go is verbose. And worse still, error-prone. It's easy to be lazy and not check for errors, and oops, 3 months in, your code fails catastrophically.
I know a lot of people like Go on here but in my mind Go only makes sense as a replacement for Python (static compilation, better error handling than Python, faster etc). If you don't know exactly what you want to build, maybe it is faster to prototype it in Go? I still would reach for Rust in those instances but that's just me. For large applications there's no question in my mind that Rust is a better choice.
Edit:
people have pointed out that I'm not comparing the same thing, which is true; I apologize for the confusion. But even code where Go propagates the errors is much more verbose (and my point still stands):
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
And this is extremely common. More discussion in the thread below.
Go is a great replacement for Python as a web backend language (which Python really is not). I'm not sold on Rust as a web backend language, though: it ends up being a little too hard to work with (hello `async`) in that application, and you need to import a lot of 3rd-party dependencies that are very opinionated. That stuff, plus the complexities of working with the borrow checker and async, adds a lot of complexity to your large, long-running applications that you don't have to manage in Go.
I think Rust is a fantastic systems language that is misapplied to the web. I think Python was a fantastic scripting language that is misapplied to the web, too, so you can put that in context.
I agree that Go's web backend features make it fun to prototype an application. But the moment I want to do anything more complicated, I'm not sure.
I counted the number of lines in my work projects, and I have $WORK projects that are 100k lines of code. Maintaining that in Go would seem like a nightmare to me, but in Rust that is so much nicer. My personal projects range from 10k - 35k and in all of those I much prefer the ones where I'm writing and maintaining Rust vs Go when it comes to similar complexity.
It sounds like you have a strong personal preference for Rust, which is fine. I'm pretty sure nobody loves Go as much as many people love Rust.
Even 100k LOC is pretty small for a software project, and likely doesn't need more than a few engineers. The advantage of the simplicity of Go shows up when you have to coordinate across >100 people, many of which are kind of mediocre, and you need all of them to ship features. If everyone in the world were a genius who is obsessed with writing clean code, Rust would be a fantastic language to work in at that scale, but they are not.
For clarification, these are 100k LOC projects where I'm the only software engineer. I've worked on larger projects in C++ with other engineers, and would absolutely continue to prefer Rust as the size of the codebase increases. I guess my point is that Rust scales in a way that few languages do. Go comes close though :)
This has been my primary objection with Go, as well. I wonder if it's just a lack of practice and that I'd eventually git gud, but I find it so hard to flow through code to get a general idea of what's going on. It's basically impossible to use code "paragraphs" to separate logical groupings of functionality because of the `if err != nil` blocks, and leads to a very choppy reading experience. With any non-trivial logic, I've found Go to be detrimental to my understanding of what's going on.
Sure, but this code propagates the errors and that has the same problem:
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
In Rust propagating errors is a lot more succinct and easy to do. It is usually what you want to do as well (you can think of Python and C++ exceptions as essentially propagating errors). The special case can be handled explicitly. In Go, you have to handle everything explicitly, and if you don't you can fail catastrophically.
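To make the comparison concrete, here is a hedged sketch of the same flow in Rust, with stub types and helpers standing in for the database API from the Go snippet:

    #[derive(Debug)]
    struct LatLong { lat: f64, long: f64 }

    #[derive(Debug)]
    struct AppError(String);

    // Stubs standing in for the Go snippet's helpers; not a real database API.
    fn db_get(_name: &str) -> Result<LatLong, AppError> { Err(AppError("cache miss".into())) }
    fn fetch_lat_long(_name: &str) -> Result<LatLong, AppError> { Ok(LatLong { lat: 52.5, long: 13.4 }) }
    fn insert_city(_name: &str, _ll: &LatLong) -> Result<(), AppError> { Ok(()) }

    fn get_lat_long(name: &str) -> Result<LatLong, AppError> {
        if let Ok(ll) = db_get(name) {
            return Ok(ll);              // cache hit: the same early return as the Go version
        }
        let ll = fetch_lat_long(name)?; // `?` propagates the error to the caller
        insert_city(name, &ll)?;        // ditto
        Ok(ll)
    }

The special case (the cache hit) is still handled explicitly; only the propagation boilerplate disappears.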
I guess it comes down to what features the language provides that make it easy to do "the right thing" (where "the right thing" may depend on where your values lie; for example, I value correctness, readability of domain logic, easy debugging, etc.). And in my opinion, it's easy to do what I consider bad software engineering in languages like Go.
The point of verbosity in Go error handling is context. In Go, you rarely just propagate errors; you prepend them with context information:
    val, err := someOperation(arg)
    if err != nil {
        return nil, fmt.Errorf("some operation with arg %v: %v", arg, err)
    }
It really sucks when you're debugging an application, and the only information you have is "read failed" because you just propagated errors. Where did the read happen? In what context?
Go errors usually contain enough context that they're good enough to print to console in CLI applications. Docker does exactly that - you've seen one if you've ever tried executing it as a user that isn't in the "docker" group:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.
If your error is 5 or 10 levels deep, do you prepend contextual information every time? External libraries typically have good error messages already; why do I have to prepend basically useless information in front of them?
Not to pick on any of these projects, but this pattern is way too common to not have some sugar.
Wait, that’s interesting, and I hadn’t formulated it this way before.
It reminds me of A Philosophy of Software Design:
The utility of a module (or any form of encapsulation, such as a function) is greater the smaller the interface is relative to the implementation. Shallow vs. deep modules.
Error handling might be a proxy to determine the semantic depth of a module.
Another perspective: (real) errors typically occur when a program interacts with the outside world, AKA side effects.
Perhaps it’s more useful to lift effects up instead of burying them. That will automatically put errors up the stack, where you actually want to handle them.
> Perhaps it’s more useful to lift effects up instead of burying them. That will automatically put errors up the stack, where you actually want to handle them.
This perspective also made me think of another advantage of Go error handling over traditional try/catch logging:
In other programming languages, when doing some asynchronous work in some background worker or whatever, exceptions become useless as error reporting mechanisms, because stack traces don't correspond to the logical flow of the program. I remember Java's Spring being exceptionally painful in this regard.
In Go, since errors are values and you're expected to add context information whenever appropriate, you can shift and shuffle those errors between stacks, and still keep useful context information when you eventually read those errors.
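A small sketch of what I mean (illustrative names only): the error value, context and all, just travels across goroutine boundaries like any other value:

    package main

    import (
        "errors"
        "fmt"
    )

    // process is a stub that always fails, to exercise the error path.
    func process(job string) error { return errors.New("disk full") }

    func worker(jobs <-chan string, results chan<- error) {
        for j := range jobs {
            if err := process(j); err != nil {
                // Context is attached here, on the worker's stack...
                results <- fmt.Errorf("processing job %q: %w", j, err)
                continue
            }
            results <- nil
        }
    }

    func main() {
        jobs := make(chan string, 1)
        results := make(chan error, 1)
        go worker(jobs, results)

        jobs <- "report-42"
        close(jobs)

        // ...and read here, on a completely different stack, context intact.
        if err := <-results; err != nil {
            fmt.Println(err) // processing job "report-42": disk full
        }
    }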
It does the same thing as the Go version. The Go version requires you to pass into the context the data to be sent back to the client, while the Rust version uses the return value of the function as the data to be sent back. The framework then serializes that appropriately.
The only issue there is if you want to return the error with code 200 (which you shouldn't, but it's been known to happen). In that case the Go code and the Rust code will look a bit closer to each other because then you can't use `?` this way (without writing some more boilerplate elsewhere).
Funny, I have the exact opposite reaction to those examples. I look at the Rust code and think "what happens when something goes wrong?" There's no way to tell from the code you gave. The error handling is somewhere else. Whereas, I can see exactly how the Go code is going to behave.
I like that errors are in your face too, but with the caveat that they should only be in your face when they matter. And in Go, the lazy thing will result in a bad time. You can always bet on people being lazy.
Like take a look at this pattern:
    err := db.Get(&latLong, "SELECT lat, long FROM cities WHERE name = $1", name)
    if err == nil {
        return latLong, nil
    }
    latLong, err = fetchLatLong(name)
    if err != nil {
        return nil, err
    }
    err = insertCity(db, name, *latLong)
    if err != nil {
        return nil, err
    }
Is it really necessary to have the error handling explicit in this case? Go through any of your Go code; I find this kind of error handling is 90% of the error handling I write.
If those calls can cause errors, then yes, it's necessary to handle them. Maybe you're content to have a contextless stack trace printed whenever things fail, but I like a little order.
1k, 100k, or 10M LOC does not change anything, because no project depends on all the LOC as a single unit; everything is split into modules / packages / classes / functions.
Kubernetes is over 1.5M LOC and I've not seen a problem with error handling there.
This talk on YouTube seems to suggest that not everything is hunky-dory with Zig as a C/C++ compiler (with reference to libc), but I cannot remember any details since it has been almost a year since the talk was published.
I wonder if things have changed? Does anyone know?
No, this talk is about something entirely different: the idea of giving a libc interface to Zig's standard library in order to use it whenever a C project needs a libc dependency and wants static linking. Currently Zig uses musl for that, but in the future it could just use its own stdlib instead.
As an addendum, using Zig to build libc is going to be amazing. It'd be like a compile-time LD_PRELOAD -- only less magical, and you have much more control over the scope.
People that are saying this is SEO, in my opinion, haven't read or used RealPython. It is genuinely one of the best Python learning websites out there. Their articles are written by people knowledgeable in the field, and usually comprehensive and thorough, while still being easy to understand.
I also think Google's ranking is fair in this case, assuming you want the highest-quality information about "how to iterate through a dictionary". I certainly would prefer this kind of RealPython article, with a table of contents so that I can find exactly what I am looking for.
I do think Google (and the web in general) has started to suck because of SEO spam, and I generally agree with the author's premise. Searching for recipes sucks, searching for reviews sucks, searching for places and events also sucks.
I just think this particular instance is a bad example. RealPython values your time, and I wish there were a lot more websites that valued my time too.
When looking for Python questions, it honestly ranks above any random StackOverflow link or the docs for me. The detailed table of contents with links that the author labels “…so…much…nonsense”, is actually the kind of precise “take me to exactly what I want to know quickly” solution they seem to yearn for.
Right, I’m genuinely confused about what the author thinks would be better than that table of contents.
I guess they wanted an answer to their specific question and absolutely nothing else? That might work for something as simple as “iterate through a dictionary” but in many other cases the full context is useful so you can figure out for yourself whether the immediate answer actually solves your problem.
If the author wanted to write "How to do absolutely anything with a dictionary" they should have titled the article that.
Were the first 20 paragraphs on what a dictionary is really useful? No, it's unnecessary verbiage covering what anyone clicking on an article with that title already knows.
I had no idea RealPython was so well respected, but the example article is really quite egregious. I think the problem is the title was written for experts, but the contents were written for a beginner. Content mismatch.
One of the main reasons Zig was interesting to me was the fact that I could drop it in as an alternative to a C/C++ compiler. On Windows, my friends have mentioned how it is easier to install Zig as a C/C++ compiler than any other alternative.
If this proposal is accepted, I personally think Zig will drop to the popularity level of Hare or other extremely niche languages. Getting my colleagues to even try Zig out required me sending them articles about how Uber was using it in production. There is no way my colleagues would even have given it a second thought if it didn't have immediate value to their existing projects.
But I get where the proposal is coming from. LLVM compile times can seem awful, and there are lots of neat optimization tricks you could implement with your own bytecode. And dealing with bugs in LLVM is basically a no-go; I've seen this happen in the Julia ecosystem as well.
If my recommendations are worth anything, I think Zig should
1. Use a custom bytecode backend for debug builds - fast build times, fast debug iteration, etc.
2. Use LLVM for release builds - fast runtime performance, slow release build times.
If they can manage (1) while still maintaining support for cross-compiling C/C++ (just pass that part off to LLVM?), I think that might be the best of all worlds, with the tradeoff that there's additional backend code to maintain.
> And dealing with bugs in LLVM is basically a no-go; I've seen this happen in the Julia ecosystem as well.
As one of the folks dealing with LLVM bugs in the Julia ecosystem: yes, it requires a distinct skill set from working on the higher-level Julia compiler, and yes, it can sometimes take ages to merge bugfixes upstream, but we actually have a rather good and productive relationship with upstream, and the project would get a lot less done if we decided to get rid of LLVM.
In particular, GPU support and HPC support (hello PPC) depend on it.
But this is also why we maintain the stance that people need to build Julia against our patchset/fork, and will not invest time in bugs filed against Julia builds that didn't use those patches. This happens in particular with distro builds.
Having had to work in a large pascal codebase, I don't ever want to use this language again.
No metaprogramming in the language meant we had some parts of the code that were written in pascal to generate pascal code before compilation of the main project.
The code was so verbose; getting the same thing done in alternative languages would have easily taken half as many lines.
Refactoring always took twice as long as I thought it would. Just moving semicolons around was enough to break my concentration when I was in the flow.
I think the kicker was identifiers being case insensitive. That alone would have been enough to drive me crazy. People complain about Nim's case insensitive features a lot, but Nim's implementation is actually good and orders of magnitude better than Pascal's.
Also hiring a good pascal programmer was next to impossible for us at the time.
I don't know why anyone would pick Pascal today over Nim, Zig, Rust, Julia, Go etc.
> No metaprogramming in the language meant we had some parts of the code that were written in pascal to generate pascal code before compilation of the main project.
Weird...if you are talking about Generics, FreePascal has them and they work just fine, plus it compiles instantly!
Here's a demo:
    program UseGenerics;
    {$mode objfpc}{$H+}

    type
      generic TFakeClass<_GT> = class
        class function gmax(a, b: _GT): _GT;
      end;

      TFakeClassInt = specialize TFakeClass<integer>;
      TFakeClassDouble = specialize TFakeClass<double>;

    class function TFakeClass.gmax(a, b: _GT): _GT;
    begin
      if a > b then
        result := a
      else
        result := b;
    end;

    begin
      { show max of two integers }
      writeln('Integer GMax: ', TFakeClassInt.gmax(23, 56));
      { show max of two doubles }
      writeln('Double GMax: ', TFakeClassDouble.gmax(23.89, 56.5));
    end.
> > No metaprogramming in the language meant we had some parts of the code that were written in pascal to generate pascal code before compilation of the main project.
> Weird…if you are talking about Generics
But aren’t they very clearly talking about metaprogramming, which is a completely different thing than generics?
Yes, but this example of matrix multiplication does weird stuff:
    {$MODE DELPHI}
    {$modeswitch advancedrecords}

    type
      TMatrix<T, R, C> = record
        fCoordinates: array[R, C] of Double;
      end;

    function mul<T, R, X, C>(A: TMatrix<T, R, X>; B: TMatrix<T, X, C>): TMatrix<T, R, C>;
    var
      i: R;
      j: X;
      k: C;
    begin
      Writeln('yep');
      Writeln(High(R)); // 3
      Writeln(High(X)); // 4
      Writeln(High(C)); // 3
      for i := Low(R) to High(R) do
        for k := Low(C) to High(C) do begin
          Result.fCoordinates[i, k] := 0;
          for j := Low(X) to High(X) do
            Result.fCoordinates[i, k] += A.fCoordinates[i, j] * B.fCoordinates[j, k]
        end
    end;

    type
      R1 = 1..3;
      R2 = 1..4;

    var
      A: TMatrix<Double, R1, R2>;
      B: TMatrix<Double, R1, R1>;
      C: TMatrix<Double, R2, R1>;

    begin
      mul<Double, R1, R2, R1>(A, B); // should fail because B would have to have as many rows as A has columns--and it doesn't
      mul(A, C) // does not work even with a right-size C--even though it should
    end.
The second bit is because you lack the explicit specialization - you must always tell the compiler how to specialize a generic. Personally I prefer to use generics in objfpc mode, as you are more explicit that way. However, FPC trunk can allow implicit function specialization if you request it with {$modeswitch implicitfunctionspecialization}. Your code compiles with that.

The first bit is an interesting case, and I wonder why it happens. It seems the root cause is that it considers the two type specializations equivalent, as even ignoring the generic functions, something like "A := B" is allowed. I trimmed the code down to this:
    {$MODE DELPHI}

    type
      TMatrix<R, C> = record
        X: array[R, C] of Double;
      end;

    type
      R1 = 1..3;
      R2 = 1..4;

    var
      A: TMatrix<R1, R2>;
      B: TMatrix<R1, R1>;

    begin
      A := B;
    end.
This is clearly a bug and not intentional behavior, as doing the specialization by hand shows an error as expected. Also, it seems related to multidimensional arrays, as with a single element it also shows an error. Note that the other way (B := A) doesn't work, so my guess is that at some point the type compatibility comparison only checks if assigning one array would fit memory-wise into the other.

I'll check against latest trunk later, and if it happens I'll file a bug report.
The types themselves count as assignable, since the values can be compatible, and unless the compiler can tell statically that a value is out of range it won't complain at compile time (but will at runtime if range checking is enabled).
However, arrays over those ranges are not compatible: if you have A: array[R1] of Double and B: array[R2] of Double, you can't assign one to the other.
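A minimal sketch of the distinction (no generics involved):

    program RangeCompat;
    {$R+}  { enable range checking }
    type
      R1 = 1..3;
      R2 = 1..4;
    var
      x: R1;
      y: R2;
      A: array[R1] of Double;
      B: array[R2] of Double;
    begin
      y := 4;
      x := y;        { compiles: the values are compatible; fails at runtime under $R+ }
      { A := B; }    { does not compile: arrays over different ranges are distinct types }
      A[1] := B[1];  { element-wise access is fine, of course }
    end.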
Free Pascal has had support for generics since 2006, predating even Delphi's. While that isn't as long as Wirth's Pascal, it is still almost two decades old, which I wouldn't call that recent.
From the point of view of someone who has worked on MLOC Delphi projects for seven years:
* The language has evolved. Delphi's Pascal compiler has generics, lambda expressions for both procedures and functions, and it has containers, map and filter functions.
* Design-time: You know how your code will behave without running it, as you are dragging and dropping components. It will draw data from your database and show it in dropdowns and grids. You have fine-tuned control over widget alignment and anchoring.
* Data modules: Having your ORM and data layer as composable components enforces separation of UI and DB operations.
* DevExpress: They make the best grid components by far. I use their web products as well.
For the cases where Delphi is a great choice, I'd steer clear as far as possible from Julia, Zig, Go and Nim. Maybe C# comes close.
> I don't know why anyone would pick Pascal today over Nim, Zig, Rust, Julia, Go etc.
When writing desktop apps, probably the familiarity with well-known IDEs such as Delphi or Lazarus makes the difference. Unfortunately, decades have passed and still nothing comes even close to them for rapid GUI development.
Should Lazarus one day support different languages such as the ones you mentioned, we'll probably hear a rumble around the world. However, things in certain fields progress much more quickly, and I wouldn't be surprised at all if in some time we could use AI to analyze a drawing and then create the corresponding desktop interface description to be linked with non-GUI code.
Lazarus is a compelling reason to pick Pascal. But I share your pain and had been looking for alternatives for machine-code-compiled RAD GUIs. To add, only modern Delphi supports var declarations placed somewhere other than at the top of the function.
Actually, the PascalABC dialect[1][2] supports using variables that way too. However, I've seen arguments that allowing variable declarations anywhere tends to lead to sloppier coding and unnecessary errors, so Free Pascal/Lazarus has not followed that trend. It also appears that Delphi/Embarcadero may have gone in that direction to synchronize their C++Builder and Delphi products, to make it easier to jump between them.
If I had to do a desktop app I’d go with Java Swing because of the Java backend libraries and developers. Were I to ignore those factors, I’d go with Gambas for the drag & drop GUI.
When was the last time I coded in Swing/JavaFX? 4 or 5 years ago, perhaps.
The 3rd party libs available for Delphi/FPC are more than enough for my needs.
I hate Python, and I've met others who feel the same.
It's a language which had a reason to exist 25 years ago, but it has since been surpassed in every way by other faster, more robust, more compatible languages that don't have such quirky syntax. So a lot like Pascal, actually. (To be fair to Pascal, its syntax was nowhere near as inconsistent and frustrating as Python's.)
Python has become the lowest common denominator. It's a practical "no-frills" language that can be picked quickly by a sysadmin, a web developer, a scientist doing number-crunching or a kid automating their homework.
The problem with the “no-frills” proposition is that it’s loaded with frills, complexities, and decades of tech debt. Just installing dependencies for a Python project can feel insurmountable.
IMO JavaScript should be the default for any Python use case. It has the same deceptive veneer of beginner-friendly dynamic behavior, same kinds of footguns; but at least JS has a single package manager, much faster optimized engines, and you’d have to learn it for front-end work anyway so it has long-term dividends.
Having used both Python and JS, the only real benefit the latter has is that it may be faster.
The language has way too many gotchas. I wish they could shed all the legacy aspects of it. My experience with Python, along with that of many other folks, is that if we don't know something (API, syntax, etc.), we often just guess and it turns out to be right. Fairly intuitive.
Agree with the other commenter: Using pip + venv tends to solve the majority of packaging problems. In my career I deal with a Python dependency headache once every few years.
> and you’d have to learn it for front-end work anyway so it has long-term dividends.
One of things I really like about Python is the huge standard library.
Contrary to JavaScript (which I think is a really weird recommendation), where you need to npm install half the world.
It wouldn’t matter if those dependencies were stable, but JS has a culture of abandoning libraries or making backward-breaking changes every week, so upgrading a project after just a few months is a major pain.
I've learned JavaScript and a framework or two, and I hate it with a passion.
I patiently wait for the moment when we can manipulate the DOM from within WebAssembly. Meanwhile, I shamelessly push for Blazor for front-end wherever I can.
Python is absolutely not a "no-frills" language. It is loaded with features. I'd be willing to bet that 99% of Python programmers don't know all the language features (even relatively old and frequently used ones like metaclasses) and that virtually nobody knows the whole stdlib.
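For anyone who hasn't run into them: a metaclass customizes class creation itself. A minimal sketch of the common registry pattern:

    class Registry(type):
        """Metaclass: runs when a class is defined, not when it is instantiated."""
        classes = {}

        def __new__(mcls, name, bases, namespace):
            cls = super().__new__(mcls, name, bases, namespace)
            Registry.classes[name] = cls  # auto-register every class we create
            return cls

    class Plugin(metaclass=Registry):
        pass

    class CsvPlugin(Plugin):  # the metaclass is inherited, so this registers too
        pass

    print(Registry.classes)  # {'Plugin': ..., 'CsvPlugin': ...}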
> It's a language which had a reason to exist 25 years ago, but has since been surpassed in every way by other faster, more robust, more compatible languages that don't have such quirky syntax.
The same could be said about C++, and yet, here we are. Stroustrup's quip comes to mind: "There are languages people complain about, and languages no one uses". Just look at how much people complain about JS.
I don't know how old people here are, but from the early 2000's till probably the mid 2010's, Python really was the great language, with few alternatives - at least when you consider the libraries available for it.
It's not an accident that people began using it heavily for numerical computing. The only viable alternative at the time was MATLAB.
I think this sentiment underestimates the amount of work people do on existing codebases.
The most reviled languages are often previously loved languages that made writing code easier at the cost of reading and understanding it.
Team 1 secretes reams of instant legacy code but gets money and promotions for shipping it. Team 42 gets handed a steaming pile of manure that is beyond understanding. They then pick up the pitchforks, because it is obviously the language's fault.
So, yeah, it isn't the language so much as the average incentives and behaviors around development.
Hahah. I like Objective-C with ARC (in spite of its complexity and limitations), but Swift is more compact and I'm probably never going back (except maybe for hobby projects like writing something for GNUstep.)
I also like how Objective-C++ can mix in C++ code bases. It's an amazing chimera of a language.
Most professional programming is done in already existing code and new projects often have to respect existing standards. You rarely get to choose what language to use.
I was recently asked to recommend a language for a new team. My boss (who won't be writing or even reading any of the code) suggested Python. His reasoning: it's what the kids are learning in college these days.
So here I am as the team Senior, looking at current and future team members and deciding on whether to be selfish and choose what's good for me, or trying to figure out what makes something good for the team. Is it better to pick something with a large hiring pool? Is it better to pick something with fewer footguns? Is it better to pick something with higher performance? Is C# really ok if we need Mac support? Is Swift ok for one-off glue apps that run on Linux? Is Zig too young? If a team of five has a Java geek, a Rust zealot, a C# fan, and somebody who only knows Python, can we just agree to all learn and use Go? If I know my replacement will undo whatever I choose, does it really matter?
Not always, but most of the languages I have used professionally were my choices.
That means x86 assembly, C, C++, C#, Python, and half of JavaScript. For JavaScript I mean half because I enjoyed doing simple things in JS on the front-end a long time ago, but these days I kind of dislike JS frameworks, and I learned them because I had to after I signed some contracts. I could have stayed 100% with backend work, but now it's too late to complain.
I managed to steer clear of Perl, Objective-C, Cobol and other languages I don't enjoy. I managed to learn F# and some OCaml, which I would like to use but never got the chance to (personal projects excluded).
Yes, I do understand that, having programmed in business environments with legacy code for over a decade.
I've hated the poor decisions my predecessors had to make, the corners that were cut.
I've hated that it's not maintained or documented.
I've hated that it doesn't have tests.
Or the seemingly lost successor, Oberon. An entire graphical OS and compiler was written, and completely documented, in a book: "Project Oberon: The Design of an Operating System and Compiler" [0]
But yes, modern stuff should not be judged by the 1970 edition. I got soured on Pascal by having to use an ISO standard compiler on an embedded system; it was painful in several areas. But I shouldn't judge modern C by pre-ANSI C, or modern C++ by cfront. So, while I still say that ISO Pascal was deeply flawed, I shouldn't hold that against all Pascal versions.
Totally agree. Various people seem to be out of touch with modern developments, misinformed, or disingenuous (really advocates for a competing language). A lot of the discussions surrounding Pascal proceed as if Object Pascal (and its dialects), Delphi, Lazarus, or Oxygene (RemObjects) don't exist or never happened.
Python is the Settlers of Catan of programming languages. Most people have a different favorite game they would rather play, but enough people don't hate it that you end up playing it anyways.
I don't hate it, but it's just mediocre. Python is fine for short scripting as long as you don't need dependencies (which remain rocket science in the Python world); at that point it's better to look elsewhere.
Dependencies in Python have been sorted for years. Pip works great for install and pip-tools compile works great for locking. Please stop spreading this untruth.
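For anyone who hasn't tried it, the whole workflow is a few commands (assuming pip-tools is installed and your direct dependencies are listed in requirements.in):

    python -m venv .venv && . .venv/bin/activate
    pip install pip-tools
    pip-compile requirements.in   # resolves and pins everything into requirements.txt
    pip-sync requirements.txt     # makes the venv match the lock file exactly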
It is a current truth that there are modern apps of interest to me, written in Python, that I can't install because I need some special dependency manager or environment manager or something, and it looks like I will do serious harm to my system environment if I follow the installation instructions.
I'm not sure I understand you. If the project uses something "special" like poetry, hatch, pdm, etc., you only need to install those for development. If you simply want to use the project, they all should be installable directly with pip, as they all have pep517-compatible backends.
Even then, how can installing those tools harm your system? You can always use pipx and have each tool be installed in its own virtual environment.
My biggest problem with pip-compile is that it's really slow. Pip-compile runs can take upwards of 20 minutes, depending on how many dependencies your project has.
We use it because we don't have any alternative. I'd really love to have a faster tool though.
No, I mean the object file defines the public symbols at the start of the file. Go very deliberately stole that feature. Rob Pike said that using that idea let Go do something faster when compiling, though I don't remember the details.
There is some embellishment in that statement, but people coming from Pascal/Oberon would likely feel much more comfortable using Go than other C-family languages. This would also include Vlang, and to an extent, Odin as well. These two have both been heavily influenced by Go and Pascal.
FWIW, Free Pascal's FCL has the fcl-passrc package, which provides units for scanning FPC code, building syntax trees, resolving identifier references, and writing/formatting source code.
Free Pascal comes with pas2js, which "transpiles" Free Pascal code to JavaScript and is written using fcl-passrc, so it should have enough functionality to parse most FPC code.
Go doesn't either, but parsing and ASTs are in the standard library, and code generation is in the standard build process. All it requires is an (ugly) comment line in your source code. So you've got decent, standard tooling. No make magic required.
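For example, a minimal sketch using the real stringer generator from golang.org/x/tools (package and type names made up):

    // color.go
    package colors

    // The "ugly" comment line: `go generate ./...` runs the command below, and
    // stringer (built on the standard go/ast and go/types machinery) writes
    // color_string.go containing a generated `func (Color) String() string`.
    //go:generate stringer -type=Color

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )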