Why we're sticking with Ruby on Rails (about.gitlab.com)
236 points by lutrinus on July 8, 2022 | 194 comments


> You need a fairly sophisticated DevOps organization to successfully run microservices. This doesn't really make a difference if you run at a scale that requires that sophistication anyhow, but it is very likely that you are not Google

Correct. Don't use tools aimed at Google levels of complexity if you're not dealing with Google levels of complexity.


"Don't use tools aimed at Google levels of complexity" is a take that has become quite popular, but it's definitely lacking nuance. Tools are pretty multi-faceted and rarely have a single selling point. For example, Kubernetes solves a whole boatload of general service orchestration problems in a unified way; perhaps it's true that you only ever need this at Google scale, but you can certainly benefit from it at almost any scale where it runs. (Which is a lot. k3s is pretty light.)

Does that mean you should write microservices for your toy project? Probably not. But the trade-off calculus for microservices isn't just scale; it's a lot of things. If you had an application that was extremely trivial to run as decoupled services, and the boundaries between them were not liable to change much, maybe you really should use microservices from zero and benefit from better failure isolation, among other things. But yes, probably not.

Still, I think the idea that microservices are not really a great idea is a flimsy justification for Ruby on Rails in general. It's fair enough to say that if your language, framework, and library choices are working for you and your product, you aren't really beholden to justify them. That said, that doesn't mean they couldn't be better with different decisions, and it's sincerely hard to determine that. Sometimes, I get the feeling that articles in this vein are just as much about convincing the authors that they made the right choice as about convincing anyone else. But it's not that important, since you don't actually need to convince much of anyone if you like your decision.


> Sometimes, I get the feeling that articles in this vein are just as much about convincing the authors that they made the right choice as about convincing anyone else. But it's not that important, since you don't actually need to convince much of anyone if you like your decision.

They don't need to convince us random HN readers, that's true.

But they do need to convince new hires, potential clients, and internal stakeholders that Rails is still the right choice. If you've ever worked on a large Rails SaaS app you know this subject comes up a lot.

Now when the subject comes up they can just send folks a link to the blog post, it's very convenient to have around for this purpose.


Slight correction: you can't run tools aimed at Google levels of complexity, because Google doesn't release the tools they use to handle their scale. They don't use k8s to manage their infrastructure, they use Borg. K8s doesn't scale to Google's needs. Also, nobody's running at Google scale.


And nobody should be.

We’ve talked many times before about how ridiculous their amount of hardware is. There are really only two arguments for “why” that make sense to me: One, they are succeeding in spite of a very large degree of excess hardware. Two, all that hardware is about advertising.

Schadenfreude wants it to be #1, but I suspect it’s actually #2. And I hope that some day - soon - “overtracking your users” becomes a hot button topic in consumer safety, maybe not as serious as industrial pollution but pretty close.


That said, gitlab.com itself runs on K8s.


k8s isn't only for microservices. You can containerize normal applications (12-factor or otherwise architected) and run them on k8s.

Is it a good idea? Depends.

On the plus side: You have one underlying runtime and orchestration layer, with a lot of useful primitives (including and especially around deployments).

On the negative: You introduce yet another layer of complexity and abstraction.


You just described the counter to “don’t run things designed for Google if you’re not Google.”

The real advice is “don’t run stacks that make trade-offs or give up guarantees to be able to scale to a size you’re not at or solve problems you don’t have.”


The systems in Kubernetes for horizontal autoscaling are extremely nice and work very well for our large monolith. At this point I can't imagine deploying any system with significant batch or online workloads any other way.

We switched to k8s early on in the company, and that little bit of work to get it operating with HPAs, etc. has paid dividends over time.


YALCA


Anyone can have Google levels of complexity, it just requires a dedicated engineering team.

Having Google levels of traffic is another thing, but it's easy enough to get Google levels of complexity.


Traffic is part of Google-level complexity. The things you need to account for when dealing with 1 million requests per day are significantly different from 1 million requests per second.


serverless environments do not carry the devops overhead involved with kubernetes or other container orchestration.


For non-trivial workloads serverless does still require a lot of orchestration and domain knowledge of cloud-specific features and constraints. Kubernetes is for sure its own beast, but it's a ton of work regardless (for gains that at the end of the day aren't what you thought they would be...).

Serverless is amazing when you use it the way it's intended. It's not there to replace your entire ec2 workload.


your assessment would've been correct 3 or 4 years ago, but there's just no need to scaffold ec2 and a mysql db inside a vpc anymore. You can upload your environment via docker and start serving serverless. You can then use a serverless db that is 90% cheaper and far less hassle than scaling ec2 instances/rds up or down, configuring a vpc, setting up nat gateways, etc.

my view is that if you are not on serverless, you are wasting resources that could otherwise have been dedicated to building out domain knowledge

our team focuses on optimizing each function and less on devops/scaling. the plumbing work is gone, and all we do now is translate business requirements. security concerns are also largely removed.

it's a great time to be on serverless; it had to overcome a lot of doubt, but it is here to stay. no need to pay $5/month either for your weekend projects. even if your project hits the front page, it will still work, and the billing is not even that bad for what you are getting.


> serverless db that is 90% cheaper

> building out domain knowledge

What about all the knowledge that is contained in applications, services, and libraries that run on RDBMSs?

I'm serious, one of the reasons I shy away from serverless is I want to put together components at a higher level than functions. But maybe I'm missing something?


What serverless hosting are you using? What tech stack do you run on it?


> configuring vpc

What advances in Serverless platforms like AWS Lambda prevent the need for configuring VPCs?


I do like me some serverless.

That said, for my own projects I don't go that route because when it is my own money I need the certainty of cost that something like a $10 droplet gives.

I know serverless will be much cheaper (probably almost free) for most likely loads, but if I'm hit with the extremely unlikely loads I'd rather a small droplet that struggles and dies than a performant serverless system that spins up my bill as well as my resources.


How do you configure your serverless environment? I set up my company with ECS a while ago, and in order to get the config into version control I had to put together a somewhat messy collection of bash scripts that use the AWS CLI to push up JSON config files. I couldn't find any standardized way to configure the whole environment: at least at the time, it wasn't possible to do everything in CloudFormation.

I haven't gotten into Kubernetes at all, but from what I've heard the bespoke-bash-scripts problem is largely solved in that world, so I've been toying with exploring it for our next project (still hosted on Fargate).


ECS fargate is a great platform but the API interface to it is pretty bad. It's such a shame.


Terraform or Pulumi? Using CloudFormation or APIs directly is an exercise in frustration IMO.


True, although there is a dev overhead and there are other issues (cold starts, unpredictable bills, zero control of the runtime, vendor lock-in, etc).

Serverless is not a silver bullet.


As a person running a service, I only care about fairness of resource allocation if my service isn't getting at least its fair share. As an ops person, you know that if I am not getting my fair share, it's because someone else is misbehaving. As a functioning adult, you understand that if you are getting more than you should, it's coming at someone else's expense, because fairness isn't being enforced.

A lot of the complexity we add to these systems is about fairness. Fairness is always about narrowing the gap between best- and worst-case run times, trying to keep the result close to expectations, and being able to defend the result when they don't match.


cold starts - not an issue anymore

unpredictable billing - this is why you set up billing alerts; not an issue unless you hit the HN front page

vendor lock-in - you can take your functions and upload them to Google, Azure, or Cloudflare


This is why I hate articles that make excuses for a lack of scalability to justify complacency.


Good for them for sticking with Rails I guess. I am just not sure who asked or what the value of the article is supposed to be.

Sure, it is interesting to see the historic reasons for Ruby, but these days PHP is a very different language, with an arguably much better optional typing story than Ruby. The community has matured quite a bit, and there are lots of people doing "enterprise-level" work. The whole comparison doesn't hold true for modern PHP.

In fact it would be interesting to reflect on the promises that Ruby on Rails made. Approachability and developer productivity it absolutely delivered, but "not messy"? Ugh, not exactly. It requires quite a bit of discipline and experience for projects not to get messy.

I don't mean to say Ruby on Rails is a bad choice. If you are invested in the ecosystem there is no reason to change (except when you want something like Elixir maybe). On the other hand other languages have long caught up and have their own rails-style frameworks that are not much worse. It is not that much of a unique selling point anymore.

Fully agree on the microservice part though.


You can stick with something for the wrong reasons just as well as you can shun something for the wrong reasons.

Is Java hard to use? I don't think so. Is PHP messy? It was before the restructuring that came with PHP 7; in fact, it's pretty good now.

Are Ruby and RoR easy to use and well structured? They can be, if you are careful and stay on the well beaten path.

As always, for small applications, RoR is a breeze to use. But once you leave toy land, the heavy amount of helpful magic comes around to kill you. Oh you thought you were calling this function with that name in that file over there? Nope, magic code loading decided to pass your call to that implementation instead. (a.k.a. monkey patching is evil) Let's not talk about dependency management and the constant shuffles due to abandoned gems. Let's not talk about nasty upgrades from one major release of Rails to another.
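
To make the "helpful magic" point concrete, here is a contrived sketch (names made up); any gem or initializer can silently reopen a class, and nothing at the call site warns you:

    # somewhere in a gem or config/initializers/helpful.rb:
    class String
      def blank?
        true # a "helpful" override; every string is now blank
      end
    end

    # months later, in a controller far away:
    "hello".blank? # => true, with no hint at the call site why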

But of course, I am being mean. The truth is that we, developers, make a mess of anything and when it becomes unbearable, we blame the tool and move on to another tool that grew from the ashes of other tools we abandoned previously.

And hence, things like Laravel are born and grow out of the lessons learned with RoR (among others).

As for Java, my Lord, the archeology of web frameworks based on it makes the archeology of life on earth seem simplistic. I survived servlets, JSPs, Spring, JSF, Struts, GWT, yada, yada, yada: generations of attempts at getting it right, growing on the putrefied corpses of failed attempts at not getting it wrong.

I think it's progress at work, but sometimes it looks like a pendulum.


And then there's Rust and Go, substantially less insane for the moment.

But Node.js, sweet node, how I love and detest thee.


Haha, you're right. I love how Node is slowwwwly becoming the new PHP: everyone says it's crap and awful, but half the internet runs on it.



I think people reach for microservices because they themselves failed to achieve a modular monolith.


It always baffled me to hear: "Oh, we're gonna do microservices, because our monolith is a total mess and we're very unproductive." If you can't organize a monolith, what makes you think you will be able to organize microservices, which are vastly more complicated to work with? How about you learn, as a company, the proper skills and habits to achieve logical modularity within a single process before you try doing that with multiple processes.


I’m probably the last person on here to discover https://grugbrain.dev/, but I like the particular quote:

“grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too”


And I would also add:

The reason your monolith is a mess is that your organization has low standards on code quality and doesn't value it. It is not willing to invest in it, and your department probably didn't acquire the skills and habits to produce clean code cheaply. The developers who are used to good code usually leave out of frustration. No one likes to be unproductive, especially productive people.


I wonder if part of why microservices work for some companies is that the hype helps justify the work to management.

> We need to convert to microservices.

Sounds like a cool shiny new approach enabling better scaling and organization.

> We need to refactor this mess.

Sounds like complaining, why is it a mess anyways?

These two things may essentially be doing the same thing, organizing/breaking up the code logically; one just might sound more appealing to management.


I hate how much I can relate to this.


"productive" means shipping code to these people. They are trying to minimize how often someone says "we are blocked on deploying that code"


You would need to provide me with a concrete example, because I've never really had the issue you're describing. I'm willing to bet it's fixable by having better management.


Assume you have to ship a large single release bundling the code of multiple integrating teams: a bug in one of those teams' code will likely delay the release of the entire bundle. Bugs are inevitable, and no amount of better management will prevent them. With microservices, you add coordination overhead, but teams have more autonomy in shipping their own individual bundles.


I must be missing something:

1) if you have a blocking bug or code dependency between two microservices OR two monolith modules, that should of course block until it is fixed and resolved. Microservices do not magically solve that.

2) other code changes or features that are not blocked, can be unblocked by being released in a smaller release ( git rebase those commits to a new release branch on main, and cut, test and deploy a smaller release -- basically gitflow)

3) once the blocking modules are fixed, they should be rebased onto the latest main.

The blocking "effect", even in a monolith, seems like it would be better solved by using a version control system with good rebase support.

Coordination needs to happen for larger changes, and/or the changes need to be smaller.


It just doesn't seem like that scales. The application I work on has half a dozen teams. Releases include tons and tons of commits, soaked for up to weeks. We can't solve this with cherry picks lol.


Wouldn't it be more sensible to release often and in smaller increments instead? Why can't you do that? You merge one feature on top of master, test it to make sure nothing is broken, then release it. The second feature has to merge on top of the new stable git hash once it's ready to be released. And so on... Bundling a bunch of stuff all at the same time seems like asking for misery and weird integration issues in production.


The big advantage of microservices is organizational not technical. You can hire teams and tell them that they are responsible for an API that receives specific data and should return specific data. These teams can operate independently. You don't have to force all the teams to use the same language or dev process. That could be good or bad depending on the circumstances, but at least you have the option with microservices.


That's the theory. What I've observed is that companies who cannot achieve modularity with a monolith are often the same ones incapable of achieving modularity with microservices anyway.

So what you have in practice is a distributed monolith, with brittle, ad-hoc dependencies between services. A symptom is that most features require touching many microservices.


This isn't a good idea organizationally though. What matters is the performance of the system, not the performance of its parts, this is kind of a systems 101 lesson.

If you build a car and send each team off to find the best parts in isolation and screw them together, you don't have the best car, you likely have something that doesn't even drive, because the parts don't fit. The performance of a system is in the interaction of its parts, not the sum of the performance of its parts taken individually.

Teams can never operate independently, because it only ever makes sense to improve a part if it improves the functioning of the entire system. The whole problem with the microservices approach is that it optimizes the wrong thing.


> This isn't a good idea organizationally though.

It's absolutely necessary for very large organizations though. The alternative of trying to centrally plan the IT operations of an organization of, say, 500,000, gets to be quickly intractable.

Groups try to do it with frameworks like SAFe and while that may help a little bit, in the end you need to treat different parts of the organization as individual silos that are able to control and execute on their specific mission set within the wider org. Otherwise the complexity of the problem set they're dealing with makes it impossible to do anything without breaking other systems or teams.


The downside to that is that if you get the factoring of the app wrong you now have to refactor across disparate codebases separated by network calls.

It's much easier to start with a monolith and refactor it into microservices later than it is to refactor microservices written in different languages by different teams into a single codebase.


What about basic business functionality? For example, a function to create a site URL, which can be quite complex, and which would be very time-consuming and a maintenance nightmare if implemented in multiple languages.

You could, of course, create this as a service. And then make possibly hundreds of calls to this service to generate a single web page.

Or, you can provide a library that does this, and the calls are cheap, reliable, and predictable.

Of course, if it is a service, the caller would locally cache the information needed to make all the calls, do a batch call to the service, and then backfill all the links. This precludes generating and sending the page as you go, and creates the possibility of an explosion in complexity if a simple change in requirements causes the arguments for the URL generation to depend on another service/library call.

Not everything should be a service call.


All those things are quite achievable with a modular monolith. The API is the module.


yeah, but deployments and language choice are no longer a single team's choice now. microservices really do allow for a "we did our job" style of development


Language is always brought in as a "pro" for microservices. I would call it a "con." Fragmentation across languages can be a big problem if your business logic is complicated; suddenly you have to link against code written in another language, or worse, rewrite it.


You can still 100% just use whatever language and build a library; instead of sending JSON or binary data through network calls, you just use FFI.
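
For instance, a minimal sketch with Ruby's ffi gem, binding a libc function; a library you build in C/C++/Rust exposes itself the same way:

    require 'ffi'

    module CLib
      extend FFI::Library
      ffi_lib FFI::Library::LIBC
      # bind size_t strlen(const char *s) straight from libc
      attach_function :strlen, [:string], :size_t
    end

    CLib.strlen("hello") # => 5; an in-process call, no JSON, no network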


Kind of unrelated unless the language chosen is the reason a team can’t do their work?


But they can't work independently. Unless your microservice operates in a vacuum, it's generating messages that will be consumed by other services. All of that communication requires a lot of frikkin' work. You need to maintain API documentation, update client libraries, remove garbage messages from the queue, coordinate deployment schedules, and all sorts of other crap. If you have FAANG-level resources, maybe you can mock out & automate everything to the point where you don't need to worry nearly as much, but that's a monolithic effort (pun intended).


Do you need services though to achieve this? How about libraries?


Libraries are great, but you can't deploy new versions of a library without stepping on toes. If your dev teams just "shove this at ops" and are done, then it may not make a difference, but if you need to be responsible for the code running in the real world, library segregation alone doesn't cut it.


I'm being dense it seems, because I don't see what microservices brings to this. What's the fundamental difference in this regard between shipping an executable with a bunch of dynamic libraries, and deploying a bunch of microservices?


It's mostly about data. Each microservice is responsible for its own data, data migrations, data store, etc. Think of it as a mini web app with an API. In theory, you could use a library in that way, but I have never seen such use in practice. You would also run into the problem of having to always keep all library users on the latest version to avoid having two versions which expect different database schemas. It also largely forces you to stick to a single programming language.


Conversely though, if you can make do with a single data store and data layer (ORM or just simple queries), the libraries can use that.

The schema and generated classes could be a shared common library that different teams can work on (maybe with some architecture oversight).

Or, if using schema-less, each library "knows" its own JSON formats.

Then you get nice things like atomic transactions across multiple “services”. You also have no network concerns.

You can pull out microservices for things that need special attention, such as a spiky-load service that might need to run as a lambda.

Version management is a solved problem. Use your language's package manager and semantic versions.
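
e.g. in a Gemfile, a pessimistic constraint pins a consuming app to a compatible release line ("billing" is a hypothetical internal gem):

    # Gemfile of the consuming app
    gem "billing", "~> 2.1" # allows 2.1, 2.2, ...; refuses 3.0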

I concede you need to stick to one language. Ish. You have C FFIs: a .NET app could reference a dll written in C++ or Rust. But yeah, not as flexible as services. Although there are in-between things like RPC.


I can’t get my head around that though.

I agree with splitting into services where "you'd have to do that anyway".

But adding services where part A of the monolith needs to use part B where a library would do the job seems odd.

If you are trying to solve a specific scaling issue, then sure, but this is rare.


Yes, and in some cases maybe you really need some crucial Python library but the rest of your systems are in Go, or maybe you discovered a new library that does exactly what you need, in a third language, in the most efficient way possible; then you can embed a microservice into the mix. I think, though, it is usually best to hire for one common language as much as possible; this way your team can jump across projects if the need arises.


Yikes I would never consider introducing a network layer as an organizational tool. Teams that do this are vastly underestimating the latency and general complexity of distributed systems.


The big disadvantage of microservices is also organizational.


I think it can have its benefits but I don't think it should be your entire application. I know some people swear by it, and if that's what works for them and keeps them profitable, you know what? Go for it.

I think serverless is fantastic on the other hand[1]. I had a web application where I was parsing shapefiles to then import them into SQL Server. These files could be massive and take longer to process than they took to upload. As a result I figured that it might make more sense to create a serverless function in Azure to parse them once they were uploaded to blob storage. This was very easy to implement and transparent. If you are using something like Azure and want efficient ways to process your data without doing it all in your monolithic application and blocking your users, or creating a thread in the background which makes me a little uncomfortable, then serverless is a godsend.

I think microservice architecture, if I had to pin down where it might make sense, would be for APIs. I would then emphasize versioning each microservice API. If you do a microservice architecture for APIs, you can update parts of your API without taking the whole beast down (unless you use something like Erlang, or slots in the case of Azure Web Apps, where it forwards new requests to the new slot). I think this might be a reasonable use case for microservices. For a normal website, though, monolithic with some serverless helper functions (if needed) is probably good enough.

[1] Note: Let's not confuse microservices with serverless; they're not necessarily the same thing. Although some people use serverless to achieve a microservice architecture, serverless functions can do so much more!


The monolithic vs microservices arguments are more about where to arbitrarily draw separation lines in the deployed architecture.

Someone with a monolith that embeds the database could claim those who use MySQL failed to achieve modular monolithic.


Without diligence, monoliths tend to turn into dependency spaghetti. Microservices can force you to spend more time structuring things.


Especially in Rails apps where almost any bit of code can modify anything else.


The main reason I hear when poking more is "so we're not dependent on other teams to release our module".

For some reason, I feel this is somewhat of a scapegoat and fails in practice. But I don't have enough experience.


In any case, that sounds like an organizational problem, not a technical one.


Microservices are one way to help with scaling the organization, not necessarily the product/code.


I'm in the process of moving from being the CTO of one company to CTO of another, and I'm wondering if I should just start with Rails this time. I get a little uneasy given the popularity of the JavaScript ecosystem, and Rails seems a little harder to share code between the web app and mobile apps, but I don't know... It feels extremely productive to work on Rails.


> It feels extremely productive to work on Rails

It is. Nothing comes close. It's a shame there's such a stigma around it.

I do think that we're seeing the pendulum start to swing from microservices/cloud/PaaS everything back towards simple architectures and simple deployments and I'm hoping Rails sees a bit of a renaissance with that.

> Rails seems a little harder to share code between the web app and mobile apps

I've personally never seen this done well, ever. At least not in a way that the amount of code that can actually be usefully shared is worth the effort.


>Nothing comes close.

Laravel is as easy and productive.


I mostly work with Django but have some experience with Rails: the speed you have with Rails is insane compared with JS. Authentication, routing, management of state, SSR: it's all done for you.

Mobile App. Start with Service Workers and PWA and decide then if you need an app.


I'm mainly a Django dev, and I'm curious how the productivity of Rails compares to Django. Do you have any conclusions about their relative productivity, given reasonable competency in both frameworks?


I've worked with both extensively, though my experience with Django is mostly with very old versions. In my opinion Rails is definitively "higher velocity" than Django (recent versions of Rails have an answer to nearly everything you would need for a modern midsize web app), but I admired how simple Django was architecturally. Stack traces in Django were, at least then, easy to digest and you could comprehend most of the routing system by reading a few files, for example.

Whereas with Rails your stack traces are much longer and there's a lot more machinery each request has to go through. Rails has a lot more abstraction in general (you can blame that on Ruby's powerful reflection capabilities), but I found that I have to go delving into Rails' internals very little.


Hey there! I've done a lot more Django than I've done rails, but have had both in my job title at different points in my career. My favorite language is python but my favorite development platform is Rails. They're mostly comparable but Rails wins for me. Rather than type it all out in prose I'll just give a short little blurb about what you get and don't get with one and leave the rest as an exercise to the reader:

What you get with Django but not with Rails:

- An admin panel out of the box

- Python (which is a big win for me personally)

- Authentication out of the box, but still fairly basic

- A robust library of third party packages that benefits from python being used much broader than the web space.

- Class based views (this is the biggest thing I miss, by _far_)

- Better LSP support

- Django REST framework (rails isn't _quite_ as good but you can get kinda close by leveraging resourceful routing and the serializers that ship with rails)

What you get with Rails but not with Django:

- Websocket support out of the box (but not the greatest performance. A third party drop in replacement [Anycable] makes it much better)

- First party API for background jobs (but it requires a third party background job system to wrap, like sidekiq)

- First party integration of a JS solution (import maps)

- Second party integration of a JS bundler (js-bundling, written by rails authors but not included by default)

- Hotwire (although Django trivially integrates w/ a comparable technology, htmx, Hotwire is still pretty dope)

- Rspec and the rest of the Rails testing ecosystem (guard, VCR, factory_bot, capybara). Python has ports of some but not all.

If Django adopted just two things, I'd be very happy. I'd love to see a better async story in Django that would enable web sockets and such. And the second thing I'd love to see is a better out-of-the-box story for playing nice with frontend build chains. And, as a distant third, I'd like to see an html-over-the-wire technology like HTMX favored, with some light integration in Django to recognize headers so it can switch between sending just template segments or sending them with the layout included. The testing ecosystem I don't think is possible for Django to lift on its own, and I'm prepared to miss it while I'm away. That other stuff, though, is unfairly aging Django imo.


I'm confused, is this a new company? If not, it seems like changing the tech stack as soon as you come in is a recipe for disaster.


It's a new company, the system is still small and they just lost 80% of their engineering talent. I think what's left of the system doesn't matter that much (but I'll have to check, of course). They are currently on C# + Angular.


Why drop C#, though? Angular, I get. But modern dotnet is quite cool.


Because it seems that Rails is a lot more productive in the end. It's not a C# shortcoming, but rather a Rails merit.


Rails isn't any more productive than any other language, unless you are comparing it to dead programming languages.

What you need to do is figure out what the application is doing. Identify the best language for that task and just microservice it.

Move complicated tasks into smaller services. Don't compound the complexity into a monolith whatever you do!

You can get the best resource utilization by containerizing an application. Most people say just multithread it with Puma or whatevs... you will go into coherency hell if your app is not designed inherently around the idea of Puma. Just use unicorn and run it as a container. Right-size it and scale out accordingly.


My admittedly biased view from the other side of the fence is that it may be a lot (little) quicker initially but you run up against code quality/maintainability and performance struggles far sooner.

Granted, in a startup, worrying about next week is wasted cycles, but I don't think Rails is so much more productive as to justify switching away from an existing stack (whether Laravel, Django, .NET or whatever; while Rails was a paradigm shift when it was new, most other languages and frameworks have since adopted large amounts of the lessons from Rails). I'd urge you to give C# a second chance.


That's a stretch. I guess it depends on the dev team. Everything you have in Rails you also have in ASP. You will probably encounter a hiring problem.


why would you drop Angular though? what would you use instead?


I think Angular has a pit of failure for performance problems. Its use of RxJS is highly compromised (Angular "best practices" are often RxJS worst practices), and that infects all of Angular development and fills the entire ecosystem with memory leaks and other performance problems. (I've blogged about some specifics before, and I've built/published an open source library to try to compensate as best I can, as I currently work in codebases now sunk-cost-fallacy stuck to Angular.)

I don't think any of React/Vue/Svelte are nearly as compromised and have nearly as big of a pit of failure. I find React a good option with an active ecosystem. React especially isn't ashamed of and doesn't hide its learning curve like Angular tries to, and that learning curve is especially designed to more often than not lead towards pits of success.


can you link that blog post? sounds super interesting! thanks!

so maybe I'm just lazy, but even though I've tried to jump on the whole FRP train (back when it was blowing up around ~2012-2013 .. there was a great tutorial about it and I caaaaan't fiiiiiiind it now, but I spent hours going down memory lane thanks to HN's upvote history :D), I don't really use it or feel the need for it. TS and a nice Result type (a la Rust), the standard Angular Input/Output bindings, dependency injection, a nice template language, and ... things are smooth and easy.

Most of the complexity is fiddling with business requirements, validation (the usual mapping backend data to frontend data), somehow performance was never a problem.

And compared to Angular I spent tooo much time fiddling with props in React, fighting with people and their half assed components, their ignorance of TS, and so on :)


Dependency Injection is its own form of "too much time fiddling with props". It's just implicitly handled "magic", and you don't appreciate the time you spend fiddling with it because it seems "automatic", up until the point you need to spend an entire week unmangling a set of NgModules to fix a code-splitting problem. (Or you don't, and you watch your bundle sizes increase exponentially and assume "that's just the Angular way: huge bundles".) Which is its own performance management headache.

I do wish that there was a larger culture of TS usage in the larger React ecosystem. But also I'm still comfortable writing my @types/ modules in DefinitelyTyped if I absolutely need to and want to contribute that work to others. That's generally a good recommendation: if you find an untyped React component, check the DT issue tracker for it and post a request if there isn't already an open one. A couple of "Hacktoberfests" I've contributed @types for requests on there, and I know there are others that are watching it more than just once every October or so.

Standard Angular Input/Output bindings are some of my problems with Angular. It's a worst of both worlds situation where some things use RxJS and other things use imperative proxies and the performance weirdness of Zone.js and the weird things it does to Promises and RxJS.

I also personally don't like the template language. I think it was a terrible mistake that the template language uses .HTML file extensions. I think that sends too many designers a false sense of security with the template language and I've had to correct so many things.

> somehow performance was never a problem

Everyone has different performance considerations. I still have a tree-shake and keep-bundles-as-small-as-possible mindset. I also developed a lot of skills doing performance work in RxJS on Cycle.js projects and redux-observable "sagas" on the React side (and Rx backends in C#/.NET). So I notice a lot more things to nitpick than the average casual RxJS user. That's a big reason I keep referring to how Angular uses it as a "pit of failure": it sets up casual RxJS users and junior developers to have a bad performance time, and in some cases not realize that they have bad performance. The biggest examples are very slow memory leaks that will never impact a developer in the middle of an edit/compile/debug cycle, because the app never runs for long enough at a time in that cycle (and the developer often has plenty of RAM anyway compared to the average user), but absolutely will drive end users crazy when they have to hard refresh the tab every few hours/minutes, and neither the user nor the developer will understand why that performance problem exists.

The blog post is here: http://blog.worldmaker.net/2021/06/26/angular/

This is my open source library attempting to wrangle some RxJS best practices out of Angular component design and its standard Inputs/Outputs/Template Bindings: https://worldmaker.net/angular-pharkas/


Never had a problem with Angular DI. (Had lots of problems with DI in JVM land - with Scala, had endless problems with module loading in python, and so on.)

For sure, I won't clap for how great modern JS bundle sizes are, but I found that most of the time what dependency we use matters a lot more for bundle size than how I load and what and where.

> I still have a tree-shake and keep bundles as small as possible mindset.

I also aim for this. Though just recently, on the current project, the win in reducing downloaded content size came when I finally went ahead and compressed the unnecessarily large background images and fonts (plus edited the CSS to prefer the smaller font file).

So all in all, I think over the years I positioned myself to work on smaller projects where all the biggest concerns were completely outside the frontend library, and I wanted something with built-in TS, and so Angular became the trusted choice :)

Thanks for the link! (haven't got time to read it yet :D)


I'd start by questioning if I need a front-end framework at all.


They lost 80% of engineering talent, baking a custom framework every single new joiner would have to learn would be a colossal waste of resources.


Do you have any references to this? I must have missed the news, something like a cultural blowup internally?



react


Fable is pretty kick ass if you’re feeling adventurous within the .net ecosystem.

https://fable.io/


Rails + angular works well, might be a compromise for you. You could replace the backend endpoint by endpoint (or similar) to incrementalize it. I mean -- assuming the C# is not something you want to carry forward.


Trading .NET for Ruby is like trading a Lambo for a Pinto.

But maybe you need to drive over a mountain pass, and the Pinto has snow tires on it.


Hopefully when Strada, part of Hotwire, is released later this year, it will take care of the mobile app part of the puzzle.


Isn’t strada a running app? And also the name of Google’s gaming streaming service? And probably 5 other things that have higher search priority on PageRank?

What is Hotwire, though?


I think this answers your questions: https://hotwired.dev/

> Strada standardizes the way that web and native parts of a mobile hybrid application talk to each other via HTML bridge attributes. This makes it easy to progressively level-up web interactions with native replacements.

Strada will premiere in 2022

Hotwire is an alternative approach to building modern web applications without using much JavaScript by sending HTML instead of JSON over the wire. This makes for fast first-load pages, keeps template rendering on the server, and allows for a simpler, more productive development experience in any programming language, without sacrificing any of the speed or responsiveness associated with a traditional single-page application.


Strava is the running app


google's game streaming service - Stadia


Turns out I suck at getting names right.

Like wanting people to send money to me by means of PayBack (bonus points), instead of PayPal, and vice versa at the supermarket's checkout counter.


If you need mobile, consider a Rails backend and React Native/Expo front-end. This is a dream combination for my project which is mobile app first and web site second. I miss some of the rails view magic, but instead we get a single front-end that runs natively on mobile (iOS & Android) and it also runs on web. (It’s not common knowledge that React Native is for web too, it’s just an abstraction on the HTML elements: e.g. View instead of Div).


I’d also look at elixir/phoenix.


Yeah, I wanted to try it as well, but I don't think the team can handle such a drastic change right now (from OOP to Functional). But it looks like an awesome platform.


rails is awesome, and with hotwire we can reuse slices of the application and cut down required dev time. It's certainly optimized for developer productivity and happiness! been doing it for 13 years and still love it.


You can just create modules or dependencies in other languages you know?


Just containerize everything and put it in kubernetes or some other solution. Plenty of CI/CD tools take the toil out of it.


"just containerize everything put it in kubernetes??

Dear lord !! That sounds horrible.


In my experience ruby devs are just a little bit nicer to work with or have as teammates than JavaScript developers. YMMV.


> microservices do nothing to reduce complexity.

This is such a dogmatic statement, showing a very biased opinion. And it also depends on what perspective you have.

The reduction of complexity for developers writing and reading the system comes at the expense of increased complexity when running the system... which is a trade-off a lot of companies are fine with.


Also higher code delivery throughput via higher development parallelism. Only so many people can ship changes in a single deployable before stepping on each other's toes.


But is there a difference? You can have a monolith and still develop independent enough components. No need for complex infrastructure if all you want is modularized code.


Two of my coworkers got into a really nasty fight about exactly this. We had a monolith codebase that was extremely modular; it was all one artifact with multiple entrypoints that might as well have been ships in the night, except for a few common libraries to handle logging, tracing, metrics, and connection pooling to external services like mysql and rabbit. But we had one senior developer go on an entire crusade that our app wasn't modular enough, and what it boiled down to, in his mind, was that if it wasn't separate repos, separate artifacts, and a network boundary passing JSON or similar instead of serialized objects between them, it wasn't really modular.


I hate this kind of thought. By his definition, literally every single application could be made '100% modular' by converting every single function into its own service, except instead of an ABI like stdcall you get an ABI that is JSON over the network.

At the end of the day modularity should _not_ be defined in terms of the interface between modules.


The main difference is the boundaries in a monolith are "soft" and can easily be worked around, changed, abused etc

With a service oriented architecture (micro or otherwise) the boundaries are enforced much more strictly by definition so can't be changed or worked around that easily


Has nobody used static analysis tools to enforce boundaries in CI? We could take the output of a module dependency tool and fail the build if it sees edges corresponding to independent modules importing each other.

Example dependency for Django: https://www.flickr.com/photos/51035630876@N01/4364929942
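
This is very doable. The Rails world has purpose-built tools for it (Shopify's packwerk, for example), and even a crude homegrown check catches a lot. A minimal sketch you could run in CI (directory and namespace names are hypothetical):

    #!/usr/bin/env ruby
    # Fail the build if one module references another module's namespace.
    FORBIDDEN = {
      "app/billing"  => /\b(Shipping|Inventory)::/,
      "app/shipping" => /\b(Billing|Inventory)::/,
    }

    violations = FORBIDDEN.flat_map do |dir, pattern|
      Dir.glob("#{dir}/**/*.rb").flat_map do |file|
        File.readlines(file).each_with_index
            .select { |line, _| line =~ pattern }
            .map    { |line, i| "#{file}:#{i + 1}: #{line.strip}" }
      end
    end

    abort(violations.join("\n")) unless violations.empty?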


I asked the same at a previous Rails shop, and my understanding is that Ruby/Rails' capabilities make this especially difficult.


Not if you depend on another microservice (which you will). It just pushes the complexity to another layer - and might even increase it (IPC is non-trivial, tooling is worse (no cross-microservice refactor), etc)


I was running a self-hosted gitlab instance on my homelab, but got annoyed with rails' (and especially sidekiq's) runaway memory usage.

In my own rails projects I've helped mitigate this by compiling ruby from source with jemalloc, but honestly I don't have the mental bandwidth to hack into their whole omnibus thing, so I just switched over to gitea instead.


When I was evaluating options for a self-hosted web git service, I ruled out GitLab because I regard such high resource use as a serious architecture smell. Just asking for trouble.

Also went with gitea. Runs on a potato. If you have few enough users you can stick with SQLite to make it stupid-easy to deploy and administrate.


To be fair, gitlab is designed for enterprise-scale deployments, not a homelab or even a small team. It's like deploying an openstack cluster when you just need a few VMs.

I'm on the tools team of a mid-size company and gitlab works very well for our 200ish developers. The only real pain point for us is the difficulty of upgrading combined with the high tempo release cadence.


Oh hey, another jemalloc user! We just LD_PRELOAD it.


Caveat. I did not read the article.

---

Our team is based on Ruby on Rails. A bunch of us lament "still being on Rails". We have a few small services in alternative languages. However, every time we seriously consider moving off Rails, the list of things we'd lose just grows and grows. We can never justify it.

Starter list:

* Built in blob/binary storage

* Built in migrations

* Built in models and relationship management. Seriously, it's so freaking easy to define rails models (see the sketch below the list).

* Standards for segmentation

* Gems for just about everything

* Admin panel
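
On the models point, a sketch of how little ceremony is involved (table and association names made up):

    # db/migrate/20220708000000_create_posts.rb
    class CreatePosts < ActiveRecord::Migration[7.0]
      def change
        create_table :posts do |t|
          t.references :author, foreign_key: { to_table: :users }
          t.string :title, null: false
          t.timestamps
        end
      end
    end

    # app/models/post.rb
    class Post < ApplicationRecord
      belongs_to :author, class_name: "User"
      has_many :comments, dependent: :destroy
    end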

----

I hope we'll eventually be out-scaling Rails - but for now I'd much rather pay an extra $100/month for the next server up.


Not 100% related to the topic, but as someone using self-hosted gitlab daily: performance is not its biggest issue, UX is. There are hundreds of small annoyances in the UI, from minor details in the merge request review interface to the fact that the settings UI feels randomly organised. You really need to know where to look to find a setting. Gitlab is very feature rich and powerful, but it feels that as it evolved, things in its UI were just glued together randomly.


I don't see any reasoning for Ruby or Rails, just why they are sticking to a modulith rather than decomposing into (micro)services.


what I hear here is "i've never worked with an actually complex application built over a span of decades-plus & have no idea what it takes to turn it into microservices. I also believe everything I read in every article, so every single person's application must be as simple as mine to deconstruct"


It is cringe to see people defend overly complicated code and infrastructure. I agree with you.


As someone who suffered under a Rails-powered microservice architecture: ew. Ew ew ew ew ew ew EW!!


A tiny kubernetes stack would do them wonders.


Because it works for them, presumably.


The hardest part of using Ruby on Rails is hiring, in my experience and opinion. There are lots of Ruby developers, but there are a lot more with experience in other languages.


That's the beauty of it. You need less "experience" with Rails because it curb-stomps Javascript for productivity.


I disagree. It is much easier to make big changes, especially refactors, using a language such as TypeScript. The number of `undefined method 'example_method' for nil:NilClass` errors in Ruby on Rails projects is astounding. After working with a typed language on the backend, I am confident it is easier to quickly ship correct code with a typed language, while Ruby on Rails makes it easier to quickly ship incorrect code.


If you don't have static typing, you just need to make sure you have good test coverage. I work in ruby/rails and don't run into this problem.
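
A minimal sketch of the kind of spec that stands in for the type check (model and method names hypothetical):

    # spec/models/order_spec.rb
    require "rails_helper"

    RSpec.describe Order do
      it "computes a total even with no line items" do
        order = Order.new(line_items: [])
        # without a test like this, order.total can blow up in production
        # with "undefined method ... for nil:NilClass" instead
        expect(order.total).to eq(0)
      end
    end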


Sounds like a lot of extra work.


Static typing requires good test coverage too.

Tests confirm correct results. The fact that they also confirm expected types is a free side effect.


I'll work pretty much anywhere, with any tech, in any domain, but never on something significant without tests. Otherwise you spend all your time putting out fires. It sucks the life out of you.


You might want to check out Sorbet by Stripe if you want that with Ruby. https://sorbet.org/
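
Roughly what that looks like, as a minimal sketch; `srb tc` catches the bad call statically, and sorbet-runtime also enforces the sig at runtime:

    # typed: true
    require 'sorbet-runtime'

    class Greeter
      extend T::Sig

      sig { params(name: String).returns(String) }
      def greet(name)
        "Hello, #{name}!"
      end
    end

    Greeter.new.greet(nil) # flagged by the type checker; TypeError at runtime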


When I had to work on Java/Javascript, I had to fall back to using `any` so often to get the compilers to work that I gave up on the idea that strong typing is useful. Maybe it's fine for simple types, but trying to pass non-trivial structures around was way more trouble than it was worth.


> I had to work on Java

...

> I had to fall back to using `any`

What? Are you sure you worked with Java?


Are you sure of what you're asking? Yes, the Java side was fine. I frequently had to use "any" on the Javascript side because it couldn't understand my compound types. They parsed fine in code, but the compiler would choke.


not my experience. after three months in a rails shop my productivity sucks. It also lacks language appeal (e.g. I joined my current company in spite of rails). It's a great framework to be sure, but if you don't have a pool of rails devs to hire, IMHO I'd choose something more popular.


Yes, hard to not choose Django when you can find Python developers at every street corner.


All I read was "why we're okay with it being slow".

EDIT: This is also not really a Rails issue per se, but maybe architectural.


Rails being slow is usually not something that the user can perceive. Instead, it's just something that increases your operating costs (more servers).

For most online businesses, the operating cost of servers is small relative to the costs of support, sales, marketing and R&D.

So, yes rails is slower, but it usually isn't slow in a way that is much of a negative for the business.


> Rails being slow is usually not something that the user can perceive.

It's good that you used the word "usually"; w.r.t. GitLab it definitely feels slow. It's not just a cost problem.


Rails costs 2-3x more $ than other servers because of its poor performance. I think this is a significant tradeoff to make, as well as trading developer familiarity with the hardest and most dangerous software to upgrade and refactor. Maybe for Gitlab, which is a relatively small and straightforward piece of software with limited surface area, the sting won't be quite as strong.


> Rails costs 2-3x more $ than other servers because of its poor performance.

That's just not true.

The vast majority of time for most web apps is spent in database calls. A typical runner up on time spent is remote system calls, which, if you accept the monolith tenet that maybe you don't need those, are minimised in a Rails app.

If your app does anything meaningful, typically, that "meaningful stuff" so massively dwarfs the part of the time "in Rails" that worrying about the framework time is optimizing the wrong order of magnitude. I base this claim on looking at multiple real-world Java, Node, and Rails apps in New Relic, and during performance testing. Hint: the rails app outperformed both.

Oh, wait--do you mean server costs? If you are talking about 2-3x more in servers, perhaps.

Have you compared dev salaries to server costs, though? Here again, you'd be optimizing a small cost when you should be optimizing the big costs that matter.


> The vast majority of time for most web apps is spent in database calls.

Which is why I wouldn't reach for a webserver that can only handle one request at a time per process


Rails can use either threads or processes for parallelism.


Sure, and Gitlab doesn't, because it's not practical


Well, I developed and now manage dev for a Rails app that has half a million customers, with holiday demand peak times, etc. I haven't lost a night's sleep to the app in almost 10 years.

I'm interested in what impracticalities are involved in our threaded deployment of Rails?


How do you get Rails models querying the db to use threads and not block the whole process from accepting other requests?


Take a look into the ActiveRecord connection pool: https://api.rubyonrails.org/classes/ActiveRecord/ConnectionA...
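
Concretely, it's just configuration: Puma gives each request its own thread, and the common database drivers release the GVL while they wait on the server, so one slow query doesn't stop the process from serving other requests. A minimal sketch (numbers are illustrative; the pool should be at least the thread count):

    # config/puma.rb
    workers 2    # OS processes
    threads 5, 5 # threads per process

    # config/database.yml needs a matching connection pool:
    #   production:
    #     pool: 5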


Why isn't it practical?


I agree rails costs 2-3x on servers, I just think that tends to not matter in a business context because those costs are small relative to everything else.

What really moves the needle is R&D effectiveness. If your servers cost 2x with rails but your devs also produce 2x, that is a trade that is a good one for many businesses (especially startups and saas companies)


If Rails performance problems can be fixed with money (more servers), then why does everyone complain about gitlab being slow?


In my experience, it's systemic in rails applications due to how many ways you have to shoot yourself in the foot - it "just works" but not in a good way.

"Oh, this route takes 6 seconds to load. Why is that? OH, because it's making ten thousand database calls."


Love rails, hate ruby. A rails framework written in Go would be pretty cool.


I feel like I've read this before July 6th. Is this a repost or something?



Looks like it was previously posted on a different site and only posted to the GitLab blog July 6th.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...



https://news.ycombinator.com/item?id=31726825 posted 24 days ago was the same article as today's post.


Thanks for the correction.


If you want the full modern rails setup (rails7+docker+mysql+ruby3), this will get you off the ground in a few minutes.

https://github.com/james-ransom/rails7-on-docker-mysql


Why MySQL and not Postgres? Most modern Rails apps use Postgres in production (at least according to this survey https://rails-hosting.com/2022/#databases)


You can use the parent fork! It is Postgres. I just personally hate Postgres and have a huge MySQL db I need to support.


Why do you hate Postgres? I think that's the first time I've ever heard someone say that.


I think Rails is great and it makes sense for most companies established on it to stick with it.

But for individual engineers, going all in on Rails is likely a poor career move, in the US at least. It's rapidly becoming a commodity skill and salaries seem stagnant.


rails is fine as long as it stays small and within a container

no monolithic design or throwing turds at a wall to see what sticks

fact is, K.I.S.S. and everything will be fine

rails will just mean the container will be as bloated as python containers

don't, and I mean it, don't load rails post-deploy... always compile


GitLab has a self-hosted version of their product too. I wonder if that has influenced this perspective?


I still think it's funny that the Ruby on Rails creator didn't have enough experience for a Ruby on Rails job, in a job post that required a temporally impossible number of years of experience.


Did that happen to Hansson? The most recent instance I've heard of was Ramirez and FastAPI.



It doesn't matter what the backend is written in. What matters is user experience. And GitLab's problem is a total lack of any usability. Bloated and full of inconsistencies.


Nothing wrong with using Ruby, but if this is a justification to build something that doesn't scale, you're incredibly wrong and complacent. Scale creeps, and when it creeps it quickly takes over. You should always build infrastructure with scalability in mind from day 1. If you're writing in Ruby, it shouldn't matter whether you containerize it; what surrounds the application is what matters. It also matters whether you build a monolith or not.


Scalability from day 1??? No sir, that is how you end up with premature optimization, k8s, microservice architecture with 3 deployment tools, and abstract classes and factories for all those future use cases and DAOs.

You probably should focus on really understanding the problem your software is trying to solve first; your next few versions (iterations) will adjust the solution you thought would work. You usually only understand the problem and the shortcomings of your solution (software) after the first few versions.

All while you probably haven't even considered product-market-fit or talked to a few customers.

Focusing on scalability from day 1 is definitely a big no in my book!


You definitely don't know anything about scalability if you're saying that. Scalability is about CI/CD solutions that don't rely on bootstrapping every time a system comes live. It is about focusing on speed and delivery, because the faster you can deliver, the better a system maintains availability. If you have a deploy cycle that takes longer than 5-10 mins excluding testing, you're doing everything wrong.


I think you are redefining scalability here. A tight development/deployment cycle is just that; it is not scalability. Availability of the service shouldn't depend on how fast you can deliver new versions of the application.


You do need scalability as a fundamental component of availability.

Release cadences are important for maintaining stability and patching vulnerabilities. Without a maintained application you lose availability.

You may still be asking: where does scalability come in? There are several points to building a scalable system.

One is designing for load. If you work on a ruby application, you will know that unicorn is typically single-threaded, unless you want to do something like a sidecar nginx. Not always the best.

Going toward infrastructure with a ruby system, someone might say: oh, let's stand up an instance and let puppet manage it.

This is where someone has just decided to use an orchestration system that typically supports ruby. The flaw in this is that if you leverage a non-binary deploy and force gems to be installed when the application is set up, you create a massive potential for drift across your instances.

Suddenly you say: I will use ansible to run scripts to verify that the host and applications run and that they didn't lose coherency.

The flaw with this is that you have just overly complicated your system and likely dug yourself a mile of tech debt.

What you would use is a scalable system, be it lambda or a containerized solution like kubernetes.

One of the biggest wins is speed and deploy cycles too. Kubernetes and lambdas are scalable solutions.

So as an application organically grows in load, a scalable system will allow organic resizing on the basis of demand, ensuring that it maintains availability.

You should embed one reliability engineer on a team who can also code.


Not sure scalability means what you think it means...


Are you sure there is no survivorship bias at play here? I've seen more projects fail due to overengineering for future scenarios that never materialized than due to lack of scalability.


If you think of planning for scalability as over-engineering, you're greatly mistaken. Probably also a terrible engineer. In fact, you are likely to save money with a scalable solution, because if a particular load is not required and the system scales back as well as out, within limits, it prevents overspending the budget.


Normal VPSes and other relevant offerings are pretty cheap these days if you don't consider one of the big three clouds. They will cover most use cases and loads just fine for a long time, and there are practically no autoscaling issues, budget overruns, etc. You could of course show some concrete examples from your experience. In my experience, you save the most time and money when you work with a solution that is less dynamic and more general purpose, like a VPS, instead of specific scalable services without a fixed price. Also, it is quite easy to switch providers with a VPS; with specific scalable services, less so.


A VPS hosting kubernetes is totally fine. A VPS that is just a few statically sized nodes is problematic.

Let's say a node fails, dies, kaput. In a properly scalable solution the service would come back almost instantly: you have some sort of golden AMI and leverage containers, with no post-deploy configuration.

Even cheaper: Lambda with API Gateway. It's entirely dependent on API design, but it often works well.


Exactly!


Um what are you smoking?


Definitely not "unfiltered scalability day one" cigars


No, you should get the business going first. You can improve infrastructure later.

E.g. start with a VPS; with most offerings, they are pretty reliable. If you need to, make a failover cluster of two to make updates with very short downtime easier. You can add load balancing and zero-downtime updates later, as they make the whole system quite a bit more complex in most cases that are somewhat interesting.

Of course, if you just have a static website or something more or less state-less, doing the right thing from the get go is way easier. I have more interesting, stateful applications in mind here.


Right!!?? The phrase "experience can't be taught, it has to be experienced" comes to mind :p



