Hacker News
A first naïve look at Truffle Ruby (github.com/oracle)
118 points by thibaut_barrere on Feb 2, 2018 | 65 comments



It's a bit sad that my first reaction is to go, "oh it's Oracle" and then look at the license.

It's GPL, LGPL, and Eclipse, which is convoluted but otherwise uninteresting.


It's GPL or LGPL or Eclipse. You can pick whichever you want - it's not a convolution of all three. The licence comes from JRuby, which we forked from.


I hadn't heard of Truffle Ruby before now — it looks pretty awesome. My experience with ruby has so far been limited, but as much as I've used it, I don't really see any downsides to the language except speed (for web applications, anyway).


Things I dislike the most about Ruby:

- Raw performance

- Runtime metaprogramming from hell

- Monkey patching

- No real concurrency support, tons of non-threadsafe code and gems

- Too Rails-centric

- Little presence outside web development

- Little to no real innovation in the last years

- Mutability

- No optional / progressive types or type annotations

===

Things I love about Ruby:

- Clean syntax

- Although it's object oriented, it allows using a functional style

- Functional collection operations

- Productivity

- Flexibility

- Maturity

- Availability of high-quality libraries

- Availability of hosting solutions

- Friendly community

- Nice stack traces
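To illustrate a couple of the points above (clean syntax, functional collection operations), a small example with invented data:

```ruby
# Chainable Enumerable methods compose into pipelines without mutating anything.
orders = [
  { id: 1, total: 20.0, paid: true  },
  { id: 2, total: 5.0,  paid: false },
  { id: 3, total: 12.5, paid: true  }
]

paid_total = orders
  .select { |o| o[:paid] }    # keep only paid orders
  .map    { |o| o[:total] }   # project to the amounts
  .sum                        # reduce to a single value
# paid_total == 32.5
```

Each step returns a new collection; `orders` itself is untouched.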


I think a lot of the problems with Ruby are really problems with the nature of Rails jobs over the past 5 years. Lots of juniors (which is great) in a field where people change jobs every year or two (which is okay, but horrible from a codebase standpoint). Giant monorails applications grow strata of different generations of code/teams/opinions, and each new generation of engineer on the project adds their own stamp on top. It leads to situations where you have enormous applications that are so deeply entangled within themselves that it becomes nearly impossible to refactor out poorly performing bits of the app.

It reminds me of the Kowloon Walled City[1] -- originally a (likely cleanly built) fort that took on more and more people until it became a morass of people and buildings with no consistency or strong structural concerns.

Then Rails (and Ruby) get this reputation for being a Kowloon rather than the Fort.

Not that Ruby isn't slow compared to other languages, but it's not as slow as perceived.

[1](https://en.wikipedia.org/wiki/Kowloon_Walled_City)


I've found Ruby to be faster than Python these days; it's Node.js that beats it, for obvious reasons. I don't think JS has fewer problems around mutability and monkey patching though.


Are you talking about CPython or PyPy? MRI, Rubinius, JRuby?

Perf characteristics differ wildly between those.


On Walled City

It wasn't really cleanly built. And there are lots of stories in there. And the Japanese seem to love it.

I always wonder if someday we could rebuild the whole thing as a hotel, resort and tourist attraction.


As usual, it's all about tradeoffs. I use other languages too (Elixir, a bit of Rust, Go, Javascript etc). When the tradeoffs match your needs, Ruby is excellent.

But then addressing your points:

- Raw performance is improving (see my other post)

- Monkey patching is less and less frequent (I've been maintaining pure Ruby as well as Rails apps since 2005) and is generally considered a bad practice

- Rails-centrism is gradually becoming a thing of the past (see gems like Hanami, Roda, Sequel, Kiba, dry-rb etc)

- Runtime metaprogramming brings a lot of benefits when properly used!

- Threadsafety (I'm the first to mention that) has gradually gotten much better

So well - we're getting somewhere!


As I often note when this comes up, I've never actually written a Rails app. I've been writing Ruby for about five years now.

A huge chunk of the devops world is written in Ruby (probably about the same in Go, much less in Python). It's still the "Perl of the future" and you'll find it all over the place. Rails is still big, especially in America, but there's a lot of other very influential stuff beyond it. If you're doing web there are other (I would say better, unless you're making Basecamp) libraries that do very well, too. Hanami is cool, Grape is good, and I'm in the process of building a Rack framework called `Modern` that's intended to be a turn-key, OpenAPI3-compatible framework that makes creating API clients a trivial experience.


"Refinements" are the "cleaner" version of monkey-patching.

These are (AFAIK) a big part of the magic in Ruby testing frameworks.


Apologies if I'm wrong here, but I don't believe any of the major ruby test frameworks use refinements at all. Since they are lexically scoped they end up being too verbose to be useful for this purpose. In other words, you would need to have a series of `using MyRefiningModule` statements in every single test file.

Perhaps you were thinking of rspec's changes in the last few years. They have been moving away from monkeypatching with some changes to their syntax. Previously they defined `Object#should` and handled assertions using syntax like `my_var.should eq(1)`. The new approach, which does not use refinements, is to handle assertions using a wrapper method that works like `expect(my_var).to eq(1)`. No refinements necessary for that though.
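To make the lexical-scoping point concrete, a minimal sketch (the module and method names here are invented):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

class WithRefinement
  using Shouting            # lexically scoped to this class body

  def self.demo
    "hello".shout
  end
end

WithRefinement.demo         # => "HELLO!"
# Outside the `using` scope, "hello".shout raises NoMethodError:
# the refinement is invisible, which is why every test file would
# need its own `using` statement.
```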


The Rails-centrism is only a western phenomenon; in Japan the focus for using Ruby is a very different one. http://engineering.appfolio.com/appfolio-engineering/2017/5/...


Ruby developer in the west here. Stopped using Rails in my professional and personal work pretty much immediately on starting my career (not saying that I wouldn't use it). My last web projects were all Jekyll.

99.9% of my work involved (past tense, I'm a Systems Engineer now) ETL processes, and Ruby is a serious workhorse there. I've done extremely complex jobs in plain Ruby, the Kiba DSL, or just using Ruby to augment/encapsulate shell commands (ruby script in crontab->curl wrapper[typhoeus] to fetch data & write to file->%x(some grep/sed/awk stuff)->do more work with output in ruby).
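A rough, stdlib-only sketch of that pipeline shape (the file path, data, and grep pattern are invented; a real job would fetch with typhoeus/curl rather than use a literal string):

```ruby
# Stage 1: "fetch" (stubbed with a literal string here) and write to a file.
raw = "2018-02-01 ok\n2018-02-01 ERROR disk\n2018-02-02 ok\n"
File.write("/tmp/feed.txt", raw)

# Stage 2: shell out for the text munging; %x() returns the command's stdout.
errors = %x(grep ERROR /tmp/feed.txt)

# Stage 3: do more work with the output back in Ruby.
parsed = errors.lines.map { |line| line.split(" ", 3) }
# parsed == [["2018-02-01", "ERROR", "disk\n"]]
```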

I've never found better tools for doing this kind of work. I've seen some _nightmarish_ Node or Python projects to do similar. It's very, very rare that Ruby and its performance are my bottleneck doing this kind of work. Usually it's platform (ahem...Salesforce) or database limitations. GNU Parallel fills in the gaps usually.

Ruby is the only language I've found where I can consistently hand off my work to others. They comment on the code being clean and they keep it clean with their changes.


Just to add a funny story:

The place I just got hired by is pretty much all Python. They half-joke about hating Ruby. I did all of my challenges/problem walkthroughs in Ruby because I felt like it best communicated the problem and problem solution in code.

The number of times I heard "that solution is so clean" or "this solution would be way more involved in Python" from my interviewers would take more fingers to count than I have hands.


An example of a problem where Ruby significantly outshines Python would be welcome.


> - Little to no real innovation in the last years

You are sooooooooooo wrong it isn't even funny. I'm stuck coding to Ruby 2.2 and Ruby 2.5 is looking more and more like a totally different world.

> - Mutability

Never understood this complaint from people. I just recently started playing with a language that supports immutability (Kotlin) and it is weird to me that you guys seem to be asking for constants, rebranded. Even weirder is if I write my vars' names in CONSTANT_CASE I get yelled at by the code formatter!

Plus, in immutable languages, are there cases where immutability extends to nonscalar values like arrays or hashes or objects? That distinction just feels so...arbitrary.

Is data changing that big of a problem?


> Is data changing that big of a problem?

Yes, but it is hard to recognize it as such until you've experienced the alternative.

I don't know much about Kotlin, but the fact that it relies heavily on Java libraries is sufficient to tell me that it is a particularly poor example of the power of immutability. You see the usefulness of immutability when you have objects that are deeply immutable - when you pass them around, you know those functions haven't changed them. It is much easier to reason about deeply immutable objects than mutable ones.

> in immutable languages, are there cases where immutability extends to nonscalar values like arrays or hashes or objects

Yes, this is largely the point. Although, you will rarely see an immutable array or hashtable - more likely it will be a linked list or balanced binary search tree.


Kotlin runs on the JVM but it's closer to Scala than to Java. You can certainly have immutable structures all the way down in it, as if it were Haskell. It doesn't enforce this style, though.


You're probably right, Kotlin's version of immutability is perhaps not the best example -- pretty sure the whole var/val thing is language sugar which explains why it doesn't work for nonscalars.

Makes sense, I can see the appeal now. Thanks


> Plus, in immutable languages, are there cases where immutability extends to nonscalar values like arrays or hashes or objects?

Yes, see (among others) Erlang or, if you like static typing with your immutable values, Haskell, both of which are pervasively immutable.


This is why the `deep_dup` and `ice_nine` gems live in most of my projects.
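For context: `Object#freeze` is shallow, which is why those gems exist. A naive plain-Ruby approximation of what `ice_nine`'s deep freeze does (the real gem handles cycles and more types):

```ruby
# freeze is shallow: a frozen hash's values stay mutable.
shallow = { name: "app", tags: ["a", "b"] }.freeze
shallow[:tags] << "c"   # allowed: only the outer hash is frozen

# A naive deep freeze, purely illustrative.
def deep_freeze(obj)
  case obj
  when Hash  then obj.each { |k, v| deep_freeze(k); deep_freeze(v) }
  when Array then obj.each { |e| deep_freeze(e) }
  end
  obj.freeze
end

deep = deep_freeze({ name: "app", tags: ["a", "b"] })
deep[:tags].frozen?     # => true; `deep[:tags] << "c"` now raises
```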


The last 3 years of development to the language have been...momentous, to say the least.

Even if you just look at what's been done in garbage collection alone, it's leaps and bounds.


The "why immutable" question usually attracts zealots that throw around terms like "easy to reason about" and detractors that mention "slows code down and is harder to write"--as if those statements were inherently true, universally understood, and defined in an agreed-upon way. None of those are the case.

The usual argument in favor of immutability is not that mutability is a big problem in general, but that when unexpected mutation occurs, it's a really hard to find (debug) type of problem.

If you have a well behaved codebase that does "a = { :foo => 'bar' }; render_hash_to_browser(a); store_hash_to_database(a);", you're fine: you can assume the same data ends up in the browser and in the database.

However, if that assumption ever fails ("users are seeing data that isn't in the database!") it's often a debugging nightmare, because you have to dig down into any code that might possibly ever touch the hash to see if it's accidentally mutating it. This includes utility functions, totally unrelated areas, et cetera.
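A hedged reconstruction of that failure mode (the method bodies are invented stand-ins):

```ruby
def render_hash_to_browser(hash)
  html = "<p>#{hash[:foo]}</p>"
  hash[:foo] = nil    # a sneaky "cleanup" buried deep in the render path
  html
end

def store_hash_to_database(hash)
  hash.dup            # stand-in for the row actually written
end

a = { :foo => 'bar' }
html = render_hash_to_browser(a)  # browser sees "bar"
row  = store_hash_to_database(a)  # database stores foo: nil
# html shows "bar" but row[:foo] is nil: "users are seeing data
# that isn't in the database!" Calling a.freeze up front would
# instead raise at the mutation site, pinpointing the culprit.
```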

If you have a good debugger to assist you, and an easy way to attach your code to a harness that follows bug-reproducing paths, you're set. But those things are a) never as present as we want them to be ("just don't write bugs, duh!") and b) tend to be harder to get working in old, large, monolithic projects--just the kind of projects where sneaky mutation can become a frequent issue.

Now, all debugging is hard. And a lot of it involves delving into code you've never seen/touched before to find a bug. But sneaky-mutability issues typically expose a much wider variety and volume of code as suspect. Because of that, mutability issues often tend to be, if not objectively more difficult, at least objectively more frustrating to debug.

With this as in all things, there's a spectrum. Some folks are fine with ultra-dynamic languages that let you mutate/bind anything, and protect against that with convention and tooling. Hell, some folks are fine with languages that mutate their data structures when you do nothing but read from them (https://perlmaven.com/autovivification). For others, they just want control over what you can bind to names (a type system, or "const"--the Java/etc kind--which prevents rebinding). Still more folks want the ability to opt into truly immutable deep data structures. And others want everything in their platform to be immutable when they use it, and have all the mutability (which you nearly always need, if only for performance alone) handled behind the abstractions of their runtime.

However, past a certain criticality or size (of code/complexity/age/number of developers threshold) of a project, it often becomes necessary to move "up" that ladder to something that provides more programmatic/automatic assurances about "what is happening to my data as it passes through all this code?"


* Raw performance

Last time I checked, Ruby was faster than Python, and people generally seem to be okay with Python's speed, so I don't really think Ruby's speed is as huge an issue as people make it out to be. Also, if the project the OP mentions eventually realizes its potential, Ruby could become substantially faster than it is today. It will never approach JavaScript or Lua speeds, but faster is always better.

* Runtime metaprogramming from hell, * Monkey patching

Isn't this really the user's fault, though? Just because the language allows bad stuff to happen doesn't mean we should ever do bad stuff.

* Little presence outside web development

Do you consider NetSec to be web development? Because I've noticed Ruby is fairly active in NetSec, but it doesn't get talked about a lot. I've also heard that Ruby is used extensively in embedded devices in Japan but, due to the language barrier, it isn't known or discussed anywhere else. I'm sure there are other weird little corners where Ruby is active that just aren't really known.


> Last time I checked I think that Ruby is faster than Python

Really? When did you look? PyPy has gotten pretty stinking fast.


The stack traces suck sometimes with the metaprogramming. I have never chased down exactly what causes it, but sometimes you'll have lots of library code calls mentioned, and nothing from the application.


I agree with every point here except for clean syntax. You can make Ruby's syntax almost as nasty as you want to. I think syntax flexibility is simultaneously one of Ruby's best and worst features.


All those positives sound like python.


That is something to be proud of. I've always considered Python one of the most mature and powerful languages out there. If Ruby has reached that level, I'm honestly very happy.


I think it's on a right track to be quite something, which is why I wanted to try it out & mention that on HN.

It's interesting to note that on "classic" Ruby, there is work ongoing in 2.6 to accelerate it quite a bit (MJIT). The current PR for this is here:

https://github.com/ruby/ruby/pull/1782

Even before that, Ruby has been getting faster; you'll find running times from 2.0.0 to 2.5.0 for a little benchmark I made here:

https://github.com/thbar/kiba-ruby-benchmarks


Have you tried Crystal? I've been very impressed by it.


Yes - I've tried it & I like it. I used it for some specific bits. That said, so far I've been unable to properly port the gem I mention in the post (Kiba) to it, despite a couple of attempts.


What kind of speed are you looking for? Speed of the application in production, or the speed at which a developer can implement it? What will cost you more, a few more servers or a few more programmers?


You can't always add a few more servers. For example, let's say you have a per-request processing budget of 100ms.

The efficiency of the generated code for your language of choice effectively limits the amount of (non-concurrent, single threaded) code you can run while processing said request.

It's exactly this limit that eventually leads engineering teams at your Dropboxes, Facebooks, Twitters, Tumblrs, etc. to start investigating alternatives or alternative implementations of their favorite dynamic languages (PHP/Python/Ruby -> HHVM/Pyston/?).


I work for a company exploring Rust and Go at the moment and it turns out there's little to be gained in moving away from Ruby for web app performance. Go + Gin is only about 2x faster than modern Ruby + Sinatra at "serialize some stuff to JSON and respond over HTTP", and JRuby closes the gap even further: https://www.techempower.com/benchmarks/previews/round15/#sec...

There are cases where Ruby is obviously very slow compared to Go or an AOT language like Rust, but serving dynamic web requests isn't really one of them.

This wasn't true when Twitter moved away from Ruby 1.8 but a huge amount of work has gone into Ruby since then. For example: totally new VM, GC, threading system and there's currently an open PR to add a basic JIT compiler.


Those Sinatra numbers look a lot better than I remember them being!

But I wouldn't give that much credence to that, there's barely anything happening in those benchmarks and it seems to be testing things that have had a lot of optimization effort thrown at them. Try adding session management, user permissions, large data models, some business logic, all without putting a lot of effort into optimizing (ie like most work environments) and try again. Also, yeah, use some heavy (extremely popular) gems like ActiveRecord.

And God forbid you actually do something computationally intensive anywhere. Serialization (even with drivers written in C) is also very slow if you have a lot of data. And if you have enough traffic, even a 2x performance difference can be pretty important in terms of cost.

AR and similar libraries are the killer of performance in the Ruby eco-system. It's not the bare web framework itself, but the entire eco-system is generally not too worried about performance compared to other languages (partially because it's harder to optimize in Ruby, partially due to the culture).

That said, the language has gotten a lot faster since the 1.8 days, eg GC has seen a lot of improvement, but it's still damn slow.


Until recently none of the Ruby setups in the TechEmpower benchmark were configured even vaguely correctly. Previously they were comparing Sinatra running on 8 cores to Go running on 40 cores.

At my day job we have a prod analytics service with all the stuff you mention written in Rust + PG, and the Rust bit, which handles data ingestion via HTTP and JSON, is only about 5x faster than the equivalent Ruby code. Obviously, that was a big disappointment to me.

Serialization performance in the default Rails setup is horrific but calling Oj explicitly on an ActiveModelSerializer instead of the Rails "render :json, @model" brings it much closer to "fast" languages than you'd expect, improving performance about 74x. I wish I was joking.
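A sketch of the general idea, using the stdlib `json` here so it runs anywhere; with the oj gem, the fast call referred to above would be `Oj.dump`:

```ruby
require 'json'

payload = { "id" => 1, "name" => "widget" }

# The slow default: render :json funnels through #to_json via layers
# of ActiveSupport indirection (not reproduced here).
via_to_json = payload.to_json

# The fast path is to serialize the plain data structure directly;
# with the oj gem this would be Oj.dump(payload, mode: :compat).
via_generate = JSON.generate(payload)

via_to_json == via_generate   # same bytes, very different call paths
```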

If the perf difference simply means buying 2x the servers then as a company you have to be pretty big before it's worth throwing out the entire Ruby/Rails ecosystem and getting stuck with far less mature libraries in newer, less widely used languages like Go or Rust.

There are tons of major sites using Ruby/Python for their web layer. Instagram, Stripe, Airbnb, Shopify, HotelTonight, GitHub, and even part of Netflix is powered by Rails.

If you look at the work Aaron Patterson and Shaun Griffin have done and are doing on AR, or what Peter Ohler has done with Oj, I don't think it's fair to say that either the Rails or Ruby ecosystem doesn't care about performance. They just don't put it before usability by default.

Compared to other scripting languages, even performance focused ones like Lua, Ruby's performance is very good, and will only continue to improve with MJIT coming to MRI Ruby, and an entire new high performance alternative implementation coming with TruffleRuby.


> At my day job we have a prod analytics service with all the stuff you mention written in Rust + PG, and the Rust bit which handles data ingestion via HTTP and JSON is only about 5x faster than the equivalent Ruby code. Obviously, that was a big disappointment to me.

5x isn't too shabby for something potentially somewhat IO-bound. Presumably the effort wasn't 5x to build and maintain that, so if that's a very hot inner loop of something, that's great.

What I've found is that using Ruby (mainly gems) idiomatically can just lead to way slower performance. Eg I use time libraries a lot for my day job, and the idiomatic way to write the code had tons of hidden costs and was basically impossible to optimize.

So I wrote a ruby extension in rust, and got a 30x speedup on that code - it was pretty computationally intensive, so a perfect fit for this. Part of it was that I was forced to use more primitive types, but in Rust-land I was able to add types all over the primitives and abstract away to my heart's content, without any overhead. That's just something that's impossible in Ruby - each abstraction always has a cost and there's no way around it.

> If you look at the work Aaron Patterson and Shaun Griffin have done and are doing on AR, or what Peter Ohler has done with Oj, I don't think it's fair to say that either the Rails or Ruby ecosystem doesn't care about performance. They just don't put it before usability by default.

Right I wouldn't say nobody cares about performance, some people care a lot and they've been doing great work and helping raise the bar. But overall, the bar is pretty low in the Ruby world.

Chris Seaton (one of TruffleRuby's main authors) has a talk where he shows some of the batshit insane code you see in gems, which they had to figure out how to optimize in TruffleRuby/Graal. Code like allocating a new array and calling the `min` method in order to find the smaller of two values! And that was being called in a hot inner loop! In a gem with otherwise great functionality, a nice website and thousands of users. That's the kind of thing you only see in Ruby-land.
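The anti-pattern being described, for the curious:

```ruby
# Allocates a throwaway Array on every call just to compare two numbers:
def smaller_slow(a, b)
  [a, b].min
end

# Equivalent and allocation-free:
def smaller_fast(a, b)
  a < b ? a : b
end
# A JIT with escape analysis (Graal) can effectively turn the first into
# the second; the plain MRI interpreter pays for the Array every time.
```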

Also, using method_missing (gotta have that pretty DSL, performance and greppability/discoverability be damned) and monkey-patching/alias-chaining/etc is still common unfortunately (though a smart JIT, ie Graal, can apparently handle that).

I am really excited about TruffleRuby and it does really amazing things to optimize crazy Ruby code, so I'm looking forward to Ruby being as fast as say JS in the future. Can't wait for the day I can turn on TruffleRuby in production and halve the AWS bill.


Both are CPU bound on de-serialization. It's basically DB replication over HTTP plus some business logic at either end. The Sinatra version, which was built in a weekend and is 5x slower, has authentication and does use ActiveRecord.

Building an entire web app, with background job processing, in Rust with the ecosystem how it is at the moment is very, very slow compared to Ruby, Sinatra and Sidekiq.

When we started, Diesel couldn't join more than one table. Probably approaching 5x in development time. It's just not mature enough yet. We're still experiencing stuck threads which don't time out.

Sounds like you found a great case for a native extension!


Agreed, I wouldn't do a production web service purely in Rust yet, the ecosystem is still way too immature.


> It's exactly this limit that eventually leads engineering teams at your Dropboxes, Facebooks, Twitters, Tumblrs

True, but those are such extreme edge cases. 99.9999% of projects will never see that kind of demand for scale and speed. Facebook shouldn't switch to Ruby, but that doesn't inform whether Ruby would be viable for many (most) projects.


I don't think it's that much of an edge case. If Rails and its ecosystem lead to 300ms render times without load, but your objective is 100ms, you're going to have a problem. I don't think either number here is extreme and I've encountered that sort of situation on previous projects. It's not easily solvable by throwing more machines at it, unfortunately.


If Rails and your ecosystem are pushing 300ms renders, you have bigger problems on your plate


I don't think it's uncommon. But the point is it usually has little to do with load and isn't easily solved by scaling horizontally. Sure, you might be able to gain some efficiencies with distributed caches and such, but that still requires a fair bit of effort (the developer time we're trying to minimize).

You're right though. And I managed to sidetrack myself a bit. My intention in participating in the thread was to point out that you can't always optimize for developer speed and think you can get out of it "cheaply" with more hardware. And it's not just a problem when you're big enough for it to be a good problem to have. In those cases you're going to need to find a way to make the application faster.

Maybe it's aggressive in-app caching; maybe it's sitting there analyzing application profiles; maybe it's a rewrite in another language or another framework. Our goal is to provide a fast enough runtime where you don't have to make that decision.


> the point is it usually has little to do with load and isn't easily solved by scaling horizontally.

My point exactly. Unless you're getting slammed with traffic, there's too much going on in a page load if it takes 300ms to spit out.


If you can achieve the same rendered output much faster with a different runtime or with a rewrite or with a different framework, I don't think you're looking at a fundamental design flaw with the page. I'm happy to agree to disagree. Maybe there is too much going on with the page. But it's a common problem with Rails apps and is ideally solved without a rewrite. I think a faster runtime would be the ideal solution here.


At my day job we've seen load times for pages > 5 seconds in rails. There's no way to do it faster because the data has to be dynamically generated and loaded, and that's the bottleneck.

It's not uncommon to see this in rails applications — GitLab had the same issues.


In GitLab's case, a lot of their issues are just the classic self inflicted N+1 queries and poor planning of how to represent events in a way that could be quickly queried for display.
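The N+1 shape, sketched with a toy in-memory repository (an invented class, purely illustrative) that counts its "queries":

```ruby
class Repo
  attr_reader :queries

  def initialize
    @comments = { 1 => ["a"], 2 => ["b"], 3 => ["c"] }
    @queries = 0
  end

  def post_ids
    @queries += 1                     # SELECT id FROM posts
    @comments.keys
  end

  def comments_for(id)
    @queries += 1                     # SELECT ... WHERE post_id = ?
    @comments[id]
  end

  def comments_for_all(ids)
    @queries += 1                     # SELECT ... WHERE post_id IN (...)
    ids.map { |id| @comments[id] }
  end
end

# N+1: one query for the list, then one per row (4 total here).
repo = Repo.new
repo.post_ids.each { |id| repo.comments_for(id) }
repo.queries   # => 4

# Eager loading (what ActiveRecord's `includes` does): 2 queries total.
repo2 = Repo.new
repo2.comments_for_all(repo2.post_ids)
repo2.queries  # => 2
```

With real tables the N+1 count grows with the data, which is how pages quietly drift toward multi-second renders.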

OTOH, the background job memory issues were completely not their fault. Glibc malloc trades high memory usage for keeping both high throughput and simplicity. With Sidekiq or puma this causes much higher memory usage than necessary. Switching to jemalloc, which they did, can reduce memory usage by 50-75%.

What does your app do that takes 5 seconds? Is this because hundreds of DB calls are required, or you need to serialize thousands of objects?


Out of pure curiosity what took 300ms? Image composition? Huge JSON payload serialization?


I've seen numerous big rails apps with times as long or longer, even after decent effort has been put in to optimize. Generally a case of having a lot of things on a page and rendering it all into html server-side (the classic/old rails way of doing things).


300ms from server side rendering requires something self inflicted. Even bulk JSON endpoints should be faster than that.

27ms is the average dynamic response time for Basecamp, which is server side rendered HTML done the classic way.

I worked on a high performance Rails API serving dynamically generated map tiles and our response times were actually lower than that.


Doesn't basecamp do a ton of caching to achieve response times like that? There are certainly techniques to get there, but how long does it take to populate the caches? That's what you're going to see out of a stock Rails app.

Self-inflicted, sure. But in my experience it's the result of picking developer-friendly tools to optimize for developer time. Erubis is much faster than HAML, but a lot of developers prefer the latter. Those sorts of decisions accumulate.


No more than Match.com does, and they run on .NET. Basecamp just uses the standard Rails caching AFAIK.

Templating just isn't the issue now that it was in the days of 1.8. HAML is slower than ERB, but usually neither have a meaningful impact on response time on modern Ruby. ERB is approaching Erubis performance now.


> I worked on a high performance Rails API serving dynamically generated map tiles and our response times were actually lower than that.

Sounds like an ideal case for fast response times honestly. I've seen rails endpoints like that aplenty. But there also exists a lot of stuff taking hundreds of ms on a cold cache, and it's the kind of thing where the same effort done in the same way using another language ecosystem would just be 10x as fast.

If you've never seen this, perhaps you've never worked with a large monolithic rails app with a few years of legacy code built up? Brand new API endpoints can be a lot faster of course.


10x faster in, say, Go? Are you sure? Have you benchmarked it? You might be surprised.

Lets pick a weakness of Ruby: tight loops and lots of math. Take 16,000 GPS points and calculate the geographical distance to another point.

On my machine it takes 12-14ms in Ruby and 4-6ms in Go. That's only a little bit more than twice as fast! Almost certainly not worth re-writing your entire app for.
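That benchmark shape, sketched with a standard haversine distance (the 16,000-point figure is the parent's; this code is an invented approximation of the workload, not their benchmark):

```ruby
# Haversine distance in km between two lat/lon points given in degrees.
RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2)
  to_rad = Math::PI / 180.0
  dlat = (lat2 - lat1) * to_rad
  dlon = (lon2 - lon1) * to_rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * to_rad) * Math.cos(lat2 * to_rad) * Math.sin(dlon / 2)**2
  2 * RADIUS_KM * Math.asin(Math.sqrt(a))
end

# Tight loop: distance from every point to a fixed reference.
points = Array.new(16_000) { [rand * 180 - 90, rand * 360 - 180] }
ref = [48.8566, 2.3522]  # Paris
distances = points.map { |lat, lon| haversine(lat, lon, *ref) }
```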

I've worked with a bunch of large, legacy Rails apps. In between a couple of Rails jobs, I worked on a large .NET app, which didn't have much better performance. One service I helped migrate from Rails to Spring/Java for an enterprisey client actually got slower overall, despite moving to the JVM.


These are the response times for Discourse, a complex open-source Rails app. Even the slowest endpoint has an average response time of 46ms.

Percentile: response time (ms)

  categories_admin:  50: 17  75: 18  90: 22  99: 29
  home_admin:        50: 21  75: 21  90: 27  99: 40
  topic_admin:       50: 17  75: 18  90: 22  99: 32
  categories:        50: 35  75: 41  90: 43  99: 77
  home:              50: 39  75: 46  90: 49  99: 95
  topic:             50: 46  75: 52  90: 56  99: 101


Well, the idea of TruffleRuby is that you get the same speed of development but also high speed of the application in production. So you don't have to pay more for either.


You should also add: speed of maintenance, fixing bugs, refactoring. But these aspects are not part of the Ruby propaganda of course (which I myself believed in for too long).


It always bothered me that Ruby isn't used as much as Python is used outside of web development.

I think Ruby is a nicer language than Python is in many ways.


Lots of operations code was written in Ruby. Just look at Chef, Capistrano and Puppet.

That whole field wants to be Google though and thinks they have Google's problems, so now it's mostly all Python and Go. HashiCorp has written some great stuff at least.


> That whole field wants to be Google though and thinks they have Google's problems

It's becoming a recurring phenomenon, isn't it? I feel the same with frontend web development: everyone wants to be Facebook.


preaching to the f'ing choir here mate.

The volume of material out there written to help people improve the performance of their React applications makes it pretty clear to me that folks aren't looking before they leap.

I've seen more than a couple dozen projects with ~20-40 lines of simple, performant, vanilla JS turn into multi-hundred line behemoths + _an entire React framework_ for exactly zero benefit to end users because of "best practices".


> That whole field wants to be Google though and thinks they have Google's problems, so now it's mostly all Python and Go

Python has been more relevant in operations work for quite a while, and Google has been talking about their use of it for a lot longer. (Speaking as someone with approaching two decades of experience on the sysadmin/operations side of work.)

Readability and ease of learning were big bonuses, particularly given sysadmin types were historically more likely to fall into the work, and as likely to not have formal compsci backgrounds. That it also made life a heck of a lot easier to interact with APIs was invaluable too. Perl used to be an incredible pain for API work (I learnt Python and wrote automation around APIs in less time than it took me, as an experienced Perl programmer, to write similar tooling in Perl).

You also have to remember that Python has Perl's advantage: it's everywhere. Every major Linux distro has shipped with it for a long time. I'm rusty on the specific history, but I would guess it's probably due to Red Hat and all their automation being historically Python-based.


* Ruby is a nicer language than Python.

Not only that, it's cleaner, more flexible, and faster.

It's a shame more people don't think for themselves rather than just jumping on the PR bandwagon. But that's life.



