I think this comment comes across as slightly ignorant.
Many examples exist where a misguided belief in scientific 'facts' (usually a ropey hypothesis with seemingly 'damning' evidence), or a straight-up abuse of the scientific method, has caused direct harm.
Suspicion is often based on facts or experience.
People have been infected with diseases without their knowledge.
People have been forced to undergo surgical procedures on the basis of spurious claims.
People have been burnt alive in buildings judged to be safe.
And look at Boeing.
No one has a problem with science itself. Everyone accepts that the scientific method is one of our greatest cultural achievements.
But whether one is "less bright" or super smart, we all know that we humans are prone to mistakes, and just as prone to bending the truth to cover those mistakes up.
There's nothing plebeian about this form of suspicion. In fact, the scientific method relies on it (peer review).
As written, possibly.
Taken literally, it's full of holes.
But if you're not reading it as a pedant, I essentially mean that most parents will vaccinate their children, many passengers will book flights, and a majority of the citizens in a population do respect their officials (et cetera).
And I think if you were to dig deeper than this, and test that hypothesis with... well... a scientific experiment of some kind, the result would probably support it.
But a good number of people will naturally question the outcome!
At their core, I still think of these things as search engines, albeit super advanced ones. But the emotion the agent conveys with its speech synth is completely new...
Being able to feasibly feed it a whole project codebase in one 'prompt' could now make this new generation of code completion tools worthwhile. I've found them to be of limited value so far, because they're never aware of the context of proposed changes.
With Gemini though, the idea of feeding the current file, class, package, project, and perhaps even dependencies into a query could lead to some enlightening outputs.
I understand some of the disparaging replies in regard to execution, but if I'm honest, as someone who writes a lot of React, the idea of ditching it for a universally supported standard of any kind is very appealing.
React is great when you absolutely need state management on the frontend, but it's overkill for the vast majority of frontend components, to the point where I'd say it basically doubles the time to create a SPA. I wish I could just import it for one-off components and use traditional HTML rendered by the server for everything else.
We can also use React for that: the issue is the bundle size, and it doesn't appear to be designed to share the DOM. I would love to see these smaller libraries document the process and best practices for using them within otherwise vanilla environments.
This software design principle is conspicuously absent at this point in time. Sure, Agile is important, and Agile processes do generate good products. But those processes have also introduced a cyclic development model that has permeated back into the tools, and the larger tool developers (those in the React/NodeJS class in terms of user-base size) have abused this good intent.
Software can still be developed iteratively. That's not the problem. With CMake, for example, I don't 'fear' upgrades, because that team values the idea of 'finished' software. As does Microsoft.
On the other hand, the NPM and Apple dev teams do not cherish this idea. And in turn, both their user and developer communities suffer in the long run.
Microsoft will bend over backwards to maintain compatibility, but at a huge cost to themselves. Can you imagine trying to fix a bug or vulnerability in some thirty-year-old Windows code without breaking anything? It must be like wading through treacle.
Yeah, but to be honest it's worth the cost, because it benefits them in the long term. It sends a good signal to potential partners/investors (as in software developers/companies) that it's worth investing your time/resources into the platform, because you can expect things to stay stable enough to create a vision/roadmap/future.
In the case of Apple, your software better make money in the next 3 years because after that you can expect to rewrite a lot of it, if said software is still possible at all...
There's not a lot of 3D software on macOS (especially CAD), but you can't blame the devs. Macs were already pretty anemic when it came to GPU power, but if you had an OpenGL codebase you would now need to rewrite it all in Metal, even though it can only run on an OS with one of the smallest market shares.
I feel it's because the main draw of Erlang/Elixir and the BEAM was scalability. Since Elixir was introduced, the cloud vendors have developed many solid alternative answers to the scalability question, with products that don't even require developers to learn a new language.
The soft-realtime and failover concepts that Erlang/Elixir champions are still unmatched elsewhere, though. I have less experience with Phoenix, but writing a WebSocket server in Elixir on Erlang's Cowboy feels like the best possible way to do it. In fact, in that (admittedly narrow) context, I think Elixir might be the best tool available for WebSocket server programming, even if it's only used on a single machine.
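For a flavour of what that looks like: a minimal sketch of an echo handler, assuming Cowboy 2.7+ as a dependency. The module name and route are mine, purely illustrative:

```elixir
# Minimal Cowboy 2.7+ WebSocket echo handler, written in Elixir.
# Module name and route are illustrative, not canonical.
defmodule EchoHandler do
  @behaviour :cowboy_websocket

  # Upgrade the plain HTTP request to a WebSocket connection.
  def init(req, state), do: {:cowboy_websocket, req, state}

  # Echo every text frame straight back to the client.
  def websocket_handle({:text, msg}, state), do: {[{:text, msg}], state}
  def websocket_handle(_frame, state), do: {[], state}

  # Erlang messages sent to this process can be pushed down the socket too.
  def websocket_info(msg, state), do: {[{:text, inspect(msg)}], state}
end

# Route "/ws" to the handler above and start the listener on port 4000.
dispatch = :cowboy_router.compile([{:_, [{"/ws", EchoHandler, []}]}])
{:ok, _} = :cowboy.start_clear(:ws_listener, [port: 4000], %{env: %{dispatch: dispatch}})
```

Each connection gets its own lightweight process, so one misbehaving client can't take the rest down, which is really the whole pitch.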
I also see the Elixir team making efforts to push into the ML space.
If they can develop a user-friendly solution to distributed model training, based on OTP semantics, I can imagine Elixir pulling some attention away from Python.
But TBF, to get the most out of Elixir, you need to learn a little about OTP, which is very different. In fact, while I won't go so far as to say learning Erlang is a prerequisite to learning Elixir, it helps a lot, and that can be intimidating.
And if you're doing webdev work, Elixir means you have to manage and maintain your own servers too.
Then there's the niche status of all BEAM languages, but that's the 'network effect' I guess.
I mean, Elixir is definitely not a replacement for Rust. I imagine it as being closer to Node, Python, or possibly Java. And even then, it's a wild comparison, because Actor Model concurrency is woven into BEAM languages in a way that no other language matches.
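For anyone who hasn't used the BEAM, a toy GenServer shows the shape of it: state lives inside a process, and every read or write is a message (the module and function names here are mine, just for illustration):

```elixir
# A toy counter: state is owned by one process, and all access goes
# through messages. This is the Actor Model baked into the language.
defmodule Counter do
  use GenServer

  # Client API: runs in the caller's process and just sends messages.
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Server callbacks: run inside the Counter process itself.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, count), do: {:noreply, count + 1}

  @impl true
  def handle_call(:value, _from, count), do: {:reply, count, count}
end

# {:ok, pid} = Counter.start_link(0)
# Counter.increment(pid)
# Counter.value(pid)  #=> 1
```

No locks, no shared memory: if two callers increment at once, the mailbox serialises them.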
For perf optimisations, you have Ports or NIFs (IIRC), where Ports are just knowingly volatile processes, and NIFs are FFI, which is an almost universal feature across languages.
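To illustrate what I mean by "knowingly volatile": a Port runs an external OS program outside the VM and talks to it over stdin/stdout, so a crash out there can't corrupt the VM (unlike a buggy NIF). A tiny sketch, with an arbitrary command:

```elixir
# Open an external OS process as a Port. It communicates over
# stdin/stdout, and if it dies it can't take the VM down with it.
port = Port.open({:spawn, "cat"}, [:binary])

# Write to the external program's stdin...
send(port, {self(), {:command, "hello\n"}})

# ...and receive whatever it writes to stdout as an ordinary message.
receive do
  {^port, {:data, data}} -> IO.puts("got back: #{data}")
end
```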
But I suppose your experience is based on use case. And I would agree that commercial BEAM language experience is generally thin on the ground if your role is to build an engineering team (though I can't imagine Rust expertise being that much more abundant either).
One thing I would add is that Erlang/Elixir's emphasis on multi-machine concurrency is somewhat at odds with the serverless future we're moving towards. But ignoring that (especially given the vendor lock-in issues), I often wonder why Elixir hasn't put a larger dent in the popularity of Kubernetes, because BEAM's approach to managing multiple services across machines seems both more powerful and more dev friendly.
Although I agree the sentiment plays a role in adoption, because many ask "why Elixir when I have k8s", while in practice, if you like k8s, you should enjoy Erlang/Elixir. They apply similar principles at different scales.
So I don't think we could have made a dent on k8s, but rather, we should have positioned ourselves earlier within k8s strengths. :)
The issue I found myself in is that I agree with you that Elixir's Actor paradigm is completely different and not comparable to other languages, but at the end of the day, very, very few applications that startups work on need that. Most apps now are trending towards serverless, and most of the world gets by without the "operating system in an operating system" approach of the BEAM.
Rust can be massively concurrent without Actors and GenServers.
So while the Actor model / let-it-fail approach changed the way I think, at the end of the day I found the traditional paradigms to actually be faster and cleaner for the real-world applications that made me money.