> Organizing your local software systems using separate processes, microservices that are combined using a REST architectural style, does help enforce module boundaries via the operating system but at significant costs. It is a very heavy-handed approach for achieving modularity.
One of the most effective and succinct criticisms I've seen of microservices: an architecture hack for modularity.
The irony is, it's rarely an effective hack. If you don't get your service boundaries right, modularity goes out the window, and you're stuck with a bunch of network calls that should be function calls.
Yup, I've seen it at just about every company I've worked for. A cluster of microservices that is really just a distributed monolith. Is it really a microservice if changing one part requires making matching changes to every service downstream? Is it really a microservice if one part breaking causes the entire system to fail spectacularly?
I feel like the pendulum is starting to swing the other way as companies learn that microservices aren't the end-all solution to their process problems, and that prioritizing new features every sprint while constantly pushing off housekeeping, bug-squashing, and system improvements will result in a broken collection of microservices just as quickly as it resulted in a broken monolith.
> prioritizing new features every sprint while constantly pushing off housekeeping, bug-squashing, and system improvements will result in a broken collection of microservices just as quickly as it resulted in a broken monolith.
Now, HTF do you convince a C-level of this truth? It seems like there is never any time for any sort of housekeeping until the house is literally on fire and the whole world is looking at our pillar of smoke.
Even then, they'll usually be like "OK, take 2 weeks to fix all our architecture problems and then get back to work on new features!" As SOON as the fire is doused, they want to move on rather than removing the flammable material and the ignition sources that keep starting fires.
And just like you don't ask permission to write a line of feature code, you don't ask permission for maintenance work. You just do it, and schedule it according to business needs to the best of your ability. And make sure not to apologize for doing your work or treat maintenance work as "unimportant", because that is how it will be perceived.
It does get tricky when you have a manager defining tasks and you have yet to determine whether their job is to be a technical manager (and whether they're adequately competent) or a nerd babysitter/translator. In the former case it's good to run maintenance projects by them if they need active, dedicated time, as opposed to something you can mostly do during downtime. If it's the other kind of "manager", and you've already tried and failed to convince them to prioritize basic upkeep, you just have to fold it into other estimates and rope in the rest of your team to do the same, and/or polish your résumé.
It's tricky when it gets to the point that you need specific tasks to clean things up.
But lots of developers think any amount of cleanup needs to be scheduled. No, it does not. When you build on a feature, you modify the feature so that you can actually build on it in a reasonable fashion.
Lots of developers will, instead of doing that, shove round pegs into square holes. There's nothing about a task that says you have to do it in the sloppiest, shittiest way possible. It's like the plumbers that cut through your floor joists because that's the easiest way to get their drain in.
> You don't, it's not his job to care, it's yours.
Last time I checked it was the job of the Project Manager to decide which tasks land on the sprint backlog, since he decides the priorities of what has to be delivered.
Literally written in the job description.
As a developer you can create a plan/tasks for decreasing technical debt, but it's never your job to prioritize that, because you don't know the priorities of upper management.
Unfortunately, us bottom guys don't generally get to pick whether or not the org is bottom up or top down.
Guess switching jobs is always an option, but what I've gone through is basically watching an org switch from bottom-up to top-down as it grows, because the C-level guys want more control over everything.
In any sane system, technical debt isn't backlogged and prioritized by project managers, it's simply a tax on all work.
The point of "just doing" your technical debt isn't that you're stealing time from the PM, it's that when an engineer estimates their work, story points the next feature and everyone is deciding on the time frame, that process should implicitly include ~20% of time for you to continue to fix and update and improve things.
Most PMs I've worked with don't mind that 20%; in fact, they're happy that we're taking the time to do it. They just want accurate estimates so they can tell stakeholders a realistic time frame and somewhat meet that goal. All you have to do is bake your 20% in and everyone is happy.
Honestly if I worked for a company that micromanaged my time so closely that I was prevented from cleaning up technical debt and fixing things, I would still clean things up and be good at my job and let them fire me for it (or be searching and leave anyway). No PM, no middle manager, and no executive can ever make me sacrifice the quality of my work. They can only replace me with a monkey that won't have the same issue.
It's the job of the PM to decide what to build. It's your job how to build it, which includes taking time for an appropriate level of quality.
It's easiest to improve the codebase where and when you alter it for feature work. This is also a good control for yourself to stick to relevant refactorings. This should be your professional autonomy.
The remaining 5 to 10 percent of the time, when you need "dedicated" tasks, should be explained to and supported by the PM, who trusts you because you have a track record of delivering.
> It's the job of the PM to decide what to build. It's your job how to build it, which includes taking time for an appropriate level of quality.
Quoting my current Project Manager:
„We are short on deadlines, don't care about quality, just deliver crap and we will fix it over time (we will not /s)”
You live in some unrealistic world.
Companies set deadlines together with clients. Deadlines are signed off by people who have no idea how technically challenging it can be to build something.
You are hired to deliver ON TIME. Nobody gives a damn about quality when a new client is banging on the door.
My quarterly deliverables are defined by the product guy above me. His priorities are defined by the product org. Ultimately, the CTO/CEO are the ones who set priorities and tell them what to allocate.
I can RAISE issues that need to be addressed to product, but generally speaking, dev doesn't have a seat at the table when it comes to prioritizing stuff. Instead, that's all driven by sales. Sure the company SAYS they do, but the practice isn't there. Dev requests are routinely ignored to the point that dev stops making them.
Perhaps this is just my org that's dysfunctional, doesn't feel that way though.
Your org is most definitely dysfunctional. If you meant to say your org doesn't feel too out of ordinary, then it's not wrong depending on what industry sector you're working in.
I'd summarize my comment again: if management doesn't let you do your job properly, you can suck it up, leave, or just fix things without asking anyone. But ask yourself: do you care that much about that company, or should you just move on?
It's what I tried to address in the second part of my comment.
A lot of teams don't have dedicated PMs in the first place, but for those who do get those jobs there's an extremely wide and well-distributed gamut of competences and skill sets, even more than developers I'd say. And you have to figure out whether you got a proper one, in which case it's the right thing to run such tasks through them when non-trivial, and then there's the rest. It depends on which market bubbles you've worked in to determine whether this latter group is an outlier or the norm.
... Hello!
you do know the priority of upper management: they prioritize their own personal best interest, and to the extent that the board and CEO manage the incentive structure, this will be aligned with their fat bonuses.
so... do the same: if something creates pain for you, dispose of it.
- I can work with the current architecture, accruing debt and slowly inflating feature times, which is ultimately acceptable to the PM. This will get me pay raises, bonuses, and praise.
- I can work on the annoying stuff which product does NOT want me working on, which does decrease feature times and does decrease debt. This will get me chastised; whatever praise I get from my local team members, I don't get from the people paying me.
Working around the system is punished. Working through the system is rewarded. I get my fat bonuses by making my PO/PM happy. I don't get fat bonuses making my life easier.
> Now, HTF do you convince a C-level of this truth? It seems like there is never any time for any sort of housekeeping until the house is literally on fire and the whole world is looking at our pillar of smoke.
I've had minor successes with advocating for a tick/tock approach, i.e. every couple of iterations you have one iteration where new feature work is banned in favor of cleanup, long-ignored bugs in the backlog, etc. I haven't found a simple way to explain this, but product-centric executives do seem to see "OK, they get one iteration of what they want in trade for me getting what I want in the others" as a reasonable trade to keep the dev team happy.
It's probably the predictability and bounded costs that help.
I've been on both sides of this. I've been on projects with tech debt, but I've also had lots of experiences where new developers join a project and immediately, before even seeing the code at all, announce that they enjoy "making code clean" and "cleaning up tech debt", and ask where the tech debt is that they can refactor away. Or they'll join, spend literally 20 minutes reading the first file they come across, and then start pronouncing the decisions made by their new colleagues 18 months ago as obviously "legacy" or "hacky".
That's usually a sign of trouble. The whole concept of tech debt or "cleanness" is very under-defined. One man's tech debt is another man's pragmatic solution, or even clean design.
The last company I worked at is basically being killed by this problem. The technical leadership is weak and agrees to what the engineers say too easily. Theoretically there's a product team but they aren't technical enough to argue with the devs. The moment the company started getting product/market fit and making good sales the devs announced the product - all of three years old - was riven with tech debt and would need a massive rewrite (it didn't, I worked on it and it was fine). Three years later their grand rewrite still didn't launch. Utterly pathetic. The correct solution would have been to crack down on that sort of thing and tell devs who wanted to rewrite the product that they were welcome to do that, somewhere else. Or, they could get back to adding features that would actually make users happier.
I think a lot of companies have had that experience - the sort of devs who are obsessed with "clean code" and "tech debt" often exhibit extremely poor judgement and can end up making things worse. Especially if the product has a naturally limited lifespan anyway due to e.g. pace of change in the industry, it can be fatal to spend too much time on meta-coding.
My general rule is that if it's been working reliably and contributing to revenue generation, it's not bad code. We have code over 10 years old that looks bad by today's standards, but since those are parts of the system that almost never need to change, there is absolutely no reason to try to make them better: they are already functional and battle-tested. The push for "modern code" is often just a compulsion of developers to sprinkle their own beliefs about good coding standards over application-critical code, and it should be resisted unless there is a good case behind it (e.g. those components regularly cause instability or need to be extended).
My method is radical transparency. I simply keep refactoring notes which I turn into user stories on the backlog. I don't take the PM seat as to when to do these user stories, yet have built up the expectation that it's an ongoing process and recurring thing.
The transparency creates a rational frame: clarity about what is currently wrong and what the benefit of the improvement is. It's kind of hard to argue against that. When you do this calmly and with precision, you instill a perpetual feeling of guilt every time the PM tries to ignore it. Seed planted. 1-0 for you.
It's all in the open, the entire team sees it. That's different from a forgettable backroom discussion. The PM looks somewhat neglectful to everybody. That's 2-0.
You need to have/build a reputation of being constructive. Thinking along with the PM instead of being stubborn. So you let go of refactoring for two sprints because there really is an important business deadline, and then you bring refactoring back to the agenda. This demonstrates you're reasonable and flexible, whilst if the PM is to continue to ignore refactoring, they will stand out as the unreasonable. 3-0.
Continued ignoring of technical debt will inevitably lead to disaster, affecting real-world results. Who will get the blame? Not you; you have your bases covered, and it's on record who ignored all the warnings. 4-0.
These are all mind tricks to raise the cost for the PM of ignoring technical debt. Ideally it's not needed and you just agree to reserve 20% of your capacity for it, to end the discussion once and for all.
Of course, one may also encounter a job hopping sociopath PM that doesn't care about any of the above. In that case, just inflate your user stories and do what you can.
To conclude, in a truly healthy organization one would flip the question. PMs have a tendency to be feature fetishists. It's not even their fault as internal organizations richly reward whoever has the most ambitious feature agenda, whilst everybody thinks that the PM that "fixes the basics" is a slacker. The user of the product, in the meanwhile, doesn't seem to be a topic of concern.
Rather than requiring extreme evidence for a software engineer to do the basics of their job, ask a PM per feature what its justification is. Show me the business case.
They can't. I've been in this industry for two decades, and the consistent experience is that every long-term software project ultimately ends up 80-90% non-value or negative value, also known as "unrewarded complexity".
If I were PM, I'd spend 50% of each sprint on refactoring, simplification, and improving the core of the product that delivers business value, rather than building unrewarded complexity. For the other 50%, I'd take the team to the bar, which I consider a good use of time.
You'd find me a ridiculous PM, but I'm deadly serious that the product outcome would be better. You can't have technical debt when you don't do loans. Plus it's more fun.
This comment reads like a blanket statement but I feel like it has to be context specific. There are some cases where ignoring tech debt and grinding out code is the way to go. There are some cases where it’s not. Learning how to identify those is the hard part.
A better first approach would be to try to remove the roadblocks and bureaucracy that are pushing developers to consider microservices in the first place. Those are of course not the only arguments for microservices, but I do think they're a big one that goes unsaid more often than the modularity argument. You can have modularity in a monolithic app, but for some reason many teams don't actually take modularity seriously; they ask that everything be object-oriented and call it a day, as if writing a `module` in Ruby means you've made things modular. Microservices become appealing when the process to have monolithic code reviewed and deployed is long and painful, since they are by definition small, decoupled, and simple to deploy. Of course it often doesn't quite work out that way, in which case a monolith would have worked anyway.
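"Taking modularity seriously" inside one codebase can even be checked mechanically. Here is a toy sketch of a static import check; the package names ("orders", "billing_api", "billing_internal") and the allowed-dependency map are invented for illustration, and in practice a tool like import-linter (Python) or ArchUnit (Java) does this properly:

```python
# Toy static check: fail when one package imports another package's
# internals instead of its public API. All names below are hypothetical.
import ast

# Which top-level packages each package may import from.
ALLOWED = {
    "orders": {"billing_api"},   # orders may use billing's public API...
    "billing_api": set(),
    "billing_internal": set(),   # ...but nothing may reach into internals
}

def boundary_violations(package, source):
    """Return the imports in `source` that `package` is not allowed to make."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            root = target.split(".")[0]
            if root in ALLOWED and root != package and root not in ALLOWED[package]:
                violations.append(target)
    return violations

# Going through the public API is fine; reaching into internals is flagged.
print(boundary_violations("orders", "import billing_api"))          # []
print(boundary_violations("orders", "import billing_internal.db"))  # ['billing_internal.db']
```

Run over the whole tree in CI, a check like this gives you enforced module boundaries without paying the operating-system-process tax.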
But it's unlikely things will change in coding culture so long as there's the perverse incentive for businesses to encourage their engineers to use hacks out of expediency. Hacks inevitably make things really hard down the road, create mysterious problems that are hard to solve, and the solution is often to add bureaucracy or switch to microservice architecture.
I've worked with a variety of monoliths. The biggest complaint I've heard at companies that use that architecture is that I shouldn't have to run entire pre and post pipelines to make a minor change to an internal function that only affects one package in the entire repository.
This is solvable, but it is a problem - one that causes a lot of engineers to consider starting fresh in a new repo anyway. Multiple repositories inherently solve this for you - but bring about other problems.
In my opinion - most projects don't need microservice architecture in the sense of separated binaries. Typically, they can be written with clear boundaries within the monolith such that, should the time come when scaling concerns are real for certain applications, then they can be ripped out as needed into their own binaries.
The problem you describe is the inverse of the common testing problem you see with microservices (a dependent service changed without kicking off downstream tests against the new version). I'd take your version any day... It's easier for me to make tests faster and/or correctly identify which should run on a change if they're all in one repo, as opposed to making network calls to each other across a service boundary. Also, it's clearly always better to run superfluous tests than to skip necessary ones.
> The biggest complaint I've heard at companies that use that architecture is that I shouldn't have to run entire pre and post pipelines to make a minor change to an internal function that only affects one package in the entire repository.
That sounds like the problem is not with the monolithic architecture but with the internal processes. If you "shouldn't have to" do something, then why do it?
Monolithic architectures default to this kind of methodology, though. You have to instrument tooling to identify what tests should be run when these files/services/etc get affected in order to optimize the developer flow in a monolith.
With several repositories you get this for free, with a trade-off of other difficulties.
A lot of teams use tools that can do this sort of thing OK out of the box. I've set up CI to only run affected tests and use incremental builds in the past. My current project has incremental CI.
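To make "only run affected tests" concrete, here's a toy sketch of the mapping such tooling maintains: changed file paths resolve to packages, which then expand through a reverse-dependency graph. The package names and the graph are invented for illustration; real tools (Bazel, Nx, pytest-testmon, and the like) derive the graph from build files or coverage data rather than a hand-written dict:

```python
# Toy "affected tests" selection. DEPS is hypothetical: it maps each
# package to the packages it depends on.
DEPS = {
    "api": {"core"},
    "billing": {"core"},
    "core": set(),
}

def affected_packages(changed_files):
    """Packages whose tests must run, given a list of changed file paths."""
    # A path like "core/util.py" belongs to the "core" package.
    affected = {path.split("/")[0] for path in changed_files if "/" in path}
    # Anything that depends (transitively) on an affected package is affected.
    grew = True
    while grew:
        grew = False
        for pkg, deps in DEPS.items():
            if pkg not in affected and deps & affected:
                affected.add(pkg)
                grew = True
    return affected

# A change in core forces api and billing tests too; a change in billing
# only reruns billing's tests.
print(sorted(affected_packages(["core/util.py"])))    # ['api', 'billing', 'core']
print(sorted(affected_packages(["billing/tax.py"])))  # ['billing']
```

In CI this would typically be fed from `git diff --name-only` against the merge base, with the selected package list passed to the test runner.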
The core problem is actually dev teams. Every time I've tried to implement incremental CI when I wasn't in a position to force it through, people fought against it, complained, moaned etc until higher level management gave in. The problems were:
1. People liked the feeling of every test passing on every change. It made them feel safe, and like they could point the finger at CI if something went wrong, because if every test passed it's definitely not their fault, right? But this leads to slow builds, and then they complain about that instead. However, this is preferable to them because slow builds are the project manager/TL's problem to solve, not theirs, so there's a built-in bias towards over-testing.
2. If there's any issue with the build system that causes one test run to affect another in some way (shared state of any kind), then incrementality can cause hard to understand flakes. Sometimes tests will just fail and it'll go away if you force a truly clean rebuild. Devs hate this because they've been taught that flaky tests are the worst possible evil.
3. Build systems are generally hard-wired for correctness, so if you make a change in a module that everything transitively depends on, they'll insist on retesting everything. This is reasonable from a theoretical perspective, but again, leads to very slow builds and people finding ways around them.
4. Work-shifting. Splitting stuff out into separate build systems (separate repos is not really the issue here), may appear to solve these issues, but of course in reality just hides them. Now if you change a core module and break things, you just won't know about it until much later when that other team updates to the new version of the module. But at that point you're working on another ticket and it's now their problem to deal with, not yours, so you successfully externalized the work of updating the use sites to someone else.
Competent firms like Google have found solutions to these problems by throwing hardware at it, and by ensuring nobody was able to create their own repositories or branches. If your company uses GitHub though, you're doomed. No way to enforce a monorepo/unified CI discipline in such a setup.
Out of curiosity, how long was a full build at that place?
I wonder what it takes to give up those points, as they are very good things to have. I can't imagine I would give them up if builds took around an hour; maybe if they took a day.
I don't quite recall. IIRC it was maybe an hour, so the builds weren't extremely slow, but it meant people didn't want to run the tests locally. So if you're trying to get your work committed and CI picks up a failure you didn't, each iteration can take an hour, and that made people feel unproductive and slow: some of them struggled with context switching back and forth, and even the ones who could do it didn't like it. Which I totally agree with; context switching and small delays can be painful for feeling productive.
Obviously full test coverage is a very good thing to have. But people hate waiting. They ended up with a proliferation of smaller repos, replacing it with manual rec/sync work, which was IMO not a good idea, but probably felt more like "work" rather than waiting.
An interesting question arises here. Given most shops are now using interpreted languages, is it possible to make a change to, say, a .py file and ONLY deploy that .py file to production? i.e. do incremental releases?
The issue was rather that changing that single .py file can have an impact on service A and service B, which depend on service C where the file was changed.
In the microservice world, you have to run e2e tests, because that's the only place where you can really guarantee the system will work as you expect.
Pacts won't solve that; integration tests are the same, and unit tests even less so.
Running a monolith's tests is overall faster than spinning up 20 microservices, bringing every db to a specific state, and running the tests.
I don't know if "most shops are now using interpreted languages" is an accurate statement :) (I actually don't know).
But I think that's definitely an interesting idea (likely with some small levels of extra caution with some specific high profile files, like an API definition/endpoint or something).
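As a sketch of the single-file-deploy idea (with the caveat raised above that a changed file can still affect several dependent services, so tests still have to run): diff a content-hash manifest of the current tree against the last deployed one, and ship only the files that differ. The paths here are invented; a real pipeline would also handle deletions, process restarts, and rollback:

```python
# Toy incremental-release selection: compare content hashes against the
# last deployed manifest and list only the files that need shipping.
import hashlib

def manifest(files):
    """Map each path to a content hash. `files` maps path -> file bytes."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed_paths(deployed, current):
    """Paths whose content differs from (or is absent in) the deployed manifest."""
    return sorted(p for p, h in current.items() if deployed.get(p) != h)

# Hypothetical tree: only views.py changed since the last deploy.
deployed = manifest({"app/views.py": b"v1", "app/models.py": b"m1"})
current  = manifest({"app/views.py": b"v2", "app/models.py": b"m1"})
print(changed_paths(deployed, current))  # ['app/views.py']
```

The selected paths could then be handed to `rsync` or similar, rather than rebuilding and redeploying the whole artifact.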
When the microservice craze took off a lot of people asked me for opinions on it, and said the same thing then I say now: it more or less boils down to Conway's law. If you have multiple teams that need to iterate and deploy independently the overhead may be worth it. But if you're just a small startup with a single unitary dev team, and that generally just deploys a new version of any changed services all as one lump, it's an insane amount of complexity and overhead.
Agreed. The kicker is knowing what those boundaries are and avoiding them. I really enjoy working with microservices for async tasks, and I think they're really well-suited to them. I really dislike inter-service dependencies. And of course, there's nothing wrong with a few, small monolith services. It's all about the right tool for the right job for me, and being too prescriptive to a particular architecture comes at a high cost.
It kind of feels like Sid is lying through his teeth here, as a person who deploys and maintains a private Gitlab installation, along with a whole host of other core platform services for internal use. Gitlab is by far the most modular off-the-shelf product I've encountered outside of JFrog's Xray. Look at their official Helm chart: https://gitlab.com/gitlab-org/charts/gitlab. Gitlab itself consists of 14 sub-charts and it also bundles 4 third-party sub-charts for object storage, a web proxy and ingress controller, certificate management, and the internal container registry. Gitlab without the third parties I believe consists of 15 distinct containers.
I don't think it matches what most people think of when they hear "monolith." It is absolutely not a single process only communicating between components via function calls. Many of the Gitlab core services, such as Gitaly, are written in Go, as well, not Ruby, though they also have "gitaly-ruby" as a testing service that can be used by developers not comfortable with Go.
I've seen several times where the overhead of the network request is dramatically higher than the work being performed.
The worst example was a validation microservice that did things like "is True" or "is greater than 5" one element at a time on a large object. So you'd have an object with 50 elements and it would make 50 requests to the service. This was in the context of a batch processing job that handled millions of items, so billions of requests end up being created.
I tried to explain that each network request probably does a few thousand things like "if request_type == 'get'" during its lifecycle on both sides of the transaction, but nobody got it and I quit soon after.
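The mismatch described above is easy to demonstrate: even on loopback, with no real network in between, an HTTP round trip costs orders of magnitude more than a trivial comparison. Here's a rough, self-contained Python sketch; the trivial `validate` function stands in for the hypothetical "is greater than 5" service:

```python
# Compare a local function call against the same check behind an HTTP
# round trip. Both run in one process on loopback, so this actually
# UNDERSTATES real overhead (no cross-host latency, tiny payloads).
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def validate(value):
    # The "service" logic: trivially cheap.
    return value > 5

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        value = json.loads(self.rfile.read(length))["value"]
        body = json.dumps({"ok": validate(value)}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 200
t0 = time.perf_counter()
for i in range(N):
    validate(i)
local = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(N):
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps({"value": i}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()
remote = time.perf_counter() - t0
server.shutdown()

print(f"{N} function calls: {local * 1e6:.0f} us")
print(f"{N} HTTP calls:     {remote * 1e6:.0f} us")
```

Multiply the per-call gap by 50 fields per object and millions of objects per batch, and "billions of requests" stops being surprising.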
Resume driven development is one of the most depressing things I keep running into in my career. Wanting to learn and grow is great, but forcing customers to suffer the repercussions of your learning because technology/design pattern x is in vogue is closing in on malpractice.
Next time you interview someone, ask why they converted a system to microservices at their last job. If they don't have an actual reason, bin them.
A lot of people seem to underestimate how fast computers are; a lot of times you don't need to scale horizontally, even at "webscale". And even when you do, it's not like you need microservices for that; sometimes you just run a new instance of $app, or spin off one thing to a new service (while similar to "microservices", it's not really the same thing as the bulk remains monolithic).
Right. And since HTTP requests are share-nothing, they scale horizontally within a single instance, too. Just spin up more cores.
This microservices craze has been mind-boggling. As with many other fashions, it's excruciatingly difficult to make young or even average senior devs understand it's 90% bullshit.
When Netflix does it and the self-appointed expert bloggers praise it, as a CTO or team lead you’re pretty much screwed.
Why, SOA is a reasonable concept, and does help scaling. Say, billing, ETL and batch processing, and Web backends can live as separate services all right.
Going with micro-services is another story, and it makes sense in a more narrow gamut of circumstances. Say, microservices may be a great fit for AWS Lambda-style deployments, but this assumes spiky, sparse load patterns.
You can deploy your whole monolith on one single lambda and it will scale the same. It may even help to keep it hot.
Services, micro or not are only really useful when load factors are vastly different and you’re not serverless. Or when you have sufficiently many large teams and a sufficiently large codebase that a single deploy becomes unmanageable because too much communication / coordination is needed.
This probably is the reason why it works in some settings. Anecdotally, as someone with more domain/functional knowledge and operations knowledge than knowledge of frameworks, I found microservices architecture with good functional test coverage a better way to deal with a team of programmers like me (basically, a typical team in an offshore IT consultancy building enterprise applications using Spring Boot and Node.js). Ship the service as early as possible with available talent, and then get someone really good with that programming language or framework to deal with performance bottlenecks within the microservice. I see it basically as a way to limit the blast radius of the applications. Of course, as you said, get the boundaries wrong and you have a bigger problem.
It's worse, because now you can't fix your bad boundaries, because there are different teams working on each side, and different projects under different PMs.
Anyone find it suspicious how many companies put so much effort into justifying how Rails is still a good tech for a mature company?
I think the truth is really something like - we prototyped the app in Rails, because that's what Rails is good at, now we've evolved into a huge and slow application with no type validation, but porting the application and switching the team to something better and faster would be a nightmare, so we're just going to write an article about feeling good about where we're at.
Counterpoint: I find it suspicious that so many people on this forum take the opportunity to bag on Ruby on Rails every time it comes up. It's almost like they hate the fact that it is, in fact, a perfectly valid choice of language and platform, and they're trying to justify the self-flagellation required to use Java for web apps. I've been around the block a couple of times now, and done projects in PHP, ASP, .NET, and Java. But I've been using Rails whenever possible to make LOB CRUD apps for almost 15 years now, and there's nothing that can even hold a candle to it for productivity. Still. Things that can take just a line or two in Rails often take HUNDREDS in Java & <insert popular JS framework>, and be a nightmare to keep square with strong typing that's so popular with Typescript now. So go ahead, cast aspersions on Rails. I'll be over here, happily cranking out features as fast as my users request them, while the corporate IT departments at my Fortune 250 take 6 months to even put a request on their schedule.
EDIT: As an example, I once rewrote a web app for a department. The original was all Java/Struts. It took a team of outsourced developers 2.5 years to write. It took me 4 months to rewrite it in Rails (having no access to the code), and mine did the main operation twice as fast, which meant that you could do it online, instead of having to wait for a backgrounded process to finish and send email.
Rails is incredibly productive for 1-3 people to write something from scratch.
As soon as everyone committing code isn't synced up with the same knowledge and solution styles, it goes to hell fast. It's bad at surviving team transitions intact, bad for onboarding once the project is past its earliest days, etc.
Now, see, I would argue EXACTLY the opposite. I worked (briefly) for a Rails "house." I was assigned to a project that someone who had left the company had been working on, and told to add payment processing. I started working through the idea, putting stuff in the places it should go throughout the stack. About halfway through the process, I finally noticed some code that looked almost exactly like my function, "below the fold" of the text editor window. Then I started noticing that I had duplicated the little snippets of code needed in many files. They looked almost the same, and they were in all the same files. In fact, I realized that ALL the code to enable the processing was already done, and all I had to do was expose it on the page. To me, it was a crystal clear example that each bit of code needed to do something in Rails has a "correct" spot, and it's hard for me to believe that anyone with a nominal understanding of Rails wouldn't grok the organization, and naturally do things "the Rails way."
My experience doing agency work with Rails has been similar. Love or hate the opinionated nature of it, switching projects was fast because generally you'd just know where any given thing's home would be.
Some languages and ecosystems can be more helpful, though, even if none can save you. Rails is memorization-heavy and doesn't have static types to aid in navigation & reading. Its runtime auto-magic even resists grepping. Auto-imports mean you don't even get a decent list of which sources are contributing to a given file's behavior.
It's also the case that different teams can write it pretty differently, depending on gem choices and which Rails features they lean on. Plenty of other languages and ecosystems are like that, too, but all the above stuff means Rails is exceptionally bad, in that regard.
I totally agree that Rails is great at LOB CRUD apps. Apps like that don't get very large or complex, mostly do fairly simple transformations from a database, mostly have fairly simple validation and business logic, and can use a fairly small set of user interface components. I think the primary competitor to Rails in this space should be low/no-code frameworks that can do those things without requiring much or any code. I agree Java/Struts is worse for this, and that there is a huuuge amount of "dark matter" of apps like this out there to be written.
But! Gitlab is not a LOB CRUD app. And I think a lot of people here have been burned by ending up working on bigger more complicated projects that started in Rails because it was convenient and then persisted with it because it never became clear that there was a positive ROI in migrating to something different. Some people who have had that experience tend to be skeptical of projects getting started with Rails, because it feels like starting to dig a hole that they know may be difficult to climb out of.
But! You never really know whether some application is going to remain small and simple, or whether it is going to grow and become more complex. So everyone is just guessing based on very limited information about the future.
My two cents is that I like services. Not micro-services, larger services with well-considered boundaries, but which are not a single monolith. If that structure or philosophy is in place, I worry a lot less about implementing the individual services with Rails, because the future path is much more clear. If a particular service grows too big for Rails' britches, it can be replaced with a different implementation, without reimplementing everything. I think services are an extremely useful hedge against the risk of path dependency leading to implementations that are not working well, but are very hard to fix.
Check out Elixir/Phoenix. It's almost as productive as Rails while having performance in the same realm as Go.
Pros: immutable data types, Ruby-like syntax, similar codebase organization, super fast, more modular than Rails, and the websocket system is way faster to develop in and deploy.
Cons: no runtime metaprogramming (though you do have a sophisticated macro system), not as many drop-in libraries, and not as many jobs for it yet.
Unfortunately Elixir just doesn't have the network effects of Ruby. Ruby support (libraries, SDKs, first class integrations) and popularity (ability to hire effective devs) is just an order of magnitude or more higher than Elixir.
I say this as a massive Elixir and functional programming fanboy.
Network effects take time to develop. Unfortunately it doesn't have a major company or foundation pushing it forward aggressively. That said, I'm hopeful. It's definitely the BEST platform for building anything websocket-based; the killer apps for picking Elixir will be anything requiring soft realtime.
Case in point: I built my startup in Elixir. I completely agree on the difficulty in hiring, but it's not hard to teach. It was harder for me to find people who knew SQL well. Unlike ActiveRecord, Ecto embraces SQL, so you have to know SQL to use it.
Anyway, its time will come. More and more companies are adopting it, from Discord to Supabase to <cringe>Trump Media Group</cringe>.
I've heard this time and time again but I just don't buy it. The only similarity between Ruby and Elixir is syntax and even there I'd say it's a superficial likeness. Beyond that Elixir and Ruby are almost diametrically opposite. Elixir is functional/immutable whereas Ruby is mutable/OOP. Languages such as Elixir and Clojure were designed to counter the OOP trend and its mindless disregard for mutation. So, anyone who is really into The Ruby Way is going to have to disrobe before entering the temple of Elixir.
Yes the syntax, but also many of the libraries and conventions are the same. The package management/build tool has a lot of similarity as well. When learning a new language, learning the standard library and the many conventions is a lot of the friction. With Ruby -> Elixir that is a lot less.
Also, I think you underestimate how many people (such as myself) already use Ruby in a very functional way. Immutability is really not all that different. The only thing is you can't use bang (`!`) methods and instead have to assign: instead of `array.sort!` it's `array = array.sort`. It's really not that hard, and in fact I found it easier because I didn't have to remember whether `array.concat` mutated or not.
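To make that concrete, here's the difference in plain Ruby (no Rails required). The `freeze` variant at the end is just to show how close mutation-free Ruby can get to Elixir's defaults:

```ruby
# Mutating style: sort! changes the receiver in place.
a = [3, 1, 2]
a.sort!
# a is now [1, 2, 3]

# Non-mutating style: sort returns a new array, so rebind the name instead.
b = [3, 1, 2]
b = b.sort

# freeze makes accidental mutation a hard error, much like Elixir's defaults.
c = [3, 1, 2].freeze
begin
  c.sort!              # raises FrozenError
rescue FrozenError
  c = c.sort           # rebinding still works fine
end
```

(FrozenError is Ruby 2.5+; older Rubies raise RuntimeError for the same mistake.)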
IMHO the only "hard" things for a rubyist to learn are pattern matching and Actor model (OTP) concepts. Actor model concurrency is definitely different. But for an experienced programmer, that can be learned in a couple of hours (or less).
Cons: missing community, a lot less well maintained libraries, needs more wheel reinvention, impossible to hire experienced developers, far worse tooling and editor support.
The community was there, at least that was my feeling when I started learning Python around 2003. What wasn't there was of course almost anything web-related, for my first "website" in Python I had to resort to using mod_python, which was a perfectly fine project in itself but which wasn't giving you the same instant gratification as PHP did when it came to the web. (there was also the Zope ecosystem, but that's a subject in itself).
Elixir performance similar to Go? That's stretching it a bit. Elixir is fast within its own niche - network I/O - but it's a dynamic language so can't compete with Go on, say, heavy calculations or other CPU-intensive tasks. Elixir doesn't even have a vector data structure, relying instead on Erlang's very clunky offering.
Yes, I agree it's not as great at CPU-intensive stuff, but most people are using Go in networked applications where network I/O IS the bottleneck.
That said, Elixir has excellent interop with Rust via Rustler for when you need those heavy calculations. And I trust a Rust implementation long before I'll trust one in Go.
And as far as lacking a vector data structure: sure, but there's been a lot of work on Nx. Who cares about a vector data structure when you have a full-power numerical processing library complete with tensors? And unlike Python, you can combine it with GenStage and Broadway to do all kinds of crazy shit.
That's like asking why there is a car doing 60mph on the highway when everyone else is doing 75mph. If they are driving a VW, would you assume that means VW makes slow cars?
> That's like asking why there is a car doing 60mph on the highway when everyone else is doing 75mph.
Thankfully, in my country being too slow on the highway (the lower limit apparently being 80 km/h) will net you a fine. It's people's right to drive whatever van they fancy... as long as it doesn't have too much negative influence on society.
Okay, come on! That is fking Struts; it is hardly a fair comparison. Though otherwise I agree that Rails is a great choice for almost every CRUD app. We really should stop overthinking it: just use whichever one we're comfortable with and have enough internal knowledge of, and with modern computers it will be plenty fast enough.
This whole thing happened about 4 years ago now, and the original application was written several years before that, before the rise of React and Angular, which have BECOME popular (IMNTBHO) because they try to address the gaping hole which is the frontend of using Java for web apps.
Around that same timeframe, I tried using JHipster, which sort of attempts to do for Java everything that a Rails app stack does for Ruby. It took 45 minutes for my top-of-the-line Dell laptop/workstation to bootstrap it, and it was... unapproachable.
Why did you use Struts 4 years ago? That framework died at least a decade ago.
But don’t get me wrong, I see where you are coming from, I’m just saying that your experience may not have been up-to-date even then, let alone now. In practice, with spring boot, quarkus and the like developing a backend app is plenty productive in java.
Ay yi yi. Maybe go back and reread what I wrote. I didn't use Struts, and the original app was written at a time when that was an accepted "meta."
I've used Spring Boot. We'll have to "agree to disagree" that it's "productive."
All these things that people are stacking on top of Java are just trying to implement concepts and features that Rails has had for 15 years, and they're still nowhere near as "productive."
Interesting! I'm wondering since you have programmed in PHP, what has been your experience with that? I assume you would be using web frameworks like Laravel or Symfony or equivalent. Does Rails still beat those like it did Java and <Foo>JS?
Laravel is pretty amazing, and with modern PHP the languages are pretty comparable. PHP still has a reputation problem though, and hiring has been tough because so many PHP devs haven't used Laravel (or Rails) so they have a bit of a steep learning curve. The tooling for ruby/rails is also still a bit better than laravel, but that may not be true forever.
Do you think it's fair to compare something written in Struts to a modern Java framework like Spring Boot or Quarkus? I'd bet that it would have taken a similar time to rewrite in one of those frameworks as it took to do your rewrite.
As LesZedCB said, LOB is "Line of Business". In practice, this is where your business-logic resides.
edit: CRUD means Create-Read-Update-Delete. It's a basic programming term for a simple application, or at least the simple parts of one. Even something big like Wikipedia is, at its core, a CRUD site. Most things are, with business logic sprinkled in; but some things aren't CRUD at all (e.g. a video game).
For example, if you're a car dealership, your LOB CRUD API would handle stuff like submitting new cars or staff, or associating a sale of a specific car with a specific salesman, or whatever other biz logic you have. This could be in contrast with, say, your document storage CRUD API, which handles integrations with your document storage (e.g. storing things on SharePoint, or Ceph, or S3, etc).
LOB has no inherent value with regards to microservice vs. monolith, it's simply another way to refer to the business specific logic. The term LOB is common across business as a whole, whereas "business logic" is generally a phrase used by programmers about programs.
By "LOB CRUD" GP means those fairly generic apps that are mainly a front-end to some database, with a bit of business logic or integrations tacked on. They're not the apps making money for a business, they're just helping it tick along and do the things that actually bring in revenue.
Companies justify using Rails publicly because it makes financial sense.
There's this meme that Rails "just doesn't scale" and is "yesterday's software", but neither is necessarily true. Ruby isn't the fastest language, but it's now competitive enough that the same critics might as well throw Python under the bus while they're at it. There are things I don't like about both Ruby and Rails, but it's a perfectly viable and productive way to build websites. Very few businesses ever need to operate at the scale of Twitter or The Google, and short of that, Rails can be a great choice.
But I imagine Rails is being increasingly dismissed by newbie developers as being obsolete and not-cool. Mature companies may realize it's in their best interest to communicate that their boring-tech works just fine and that they have no plan on changing to cool-tech. If it's true, then someone's got to say it, right? Someone needs to be interested in working with Rails. Not every business has the money to waste converting their old apps to cool-tech like Elixir or a React frontend with microservices.
> Ruby isn't the fastest language, but it's now competitive enough that the same critics might as well throw Python under the bus while they're at it.
You say that like it's a point in support of Ruby, but, yes, absolutely, Python should be thrown under that same bus. Both languages are dramatically slower than other languages that are available. My little hobby programming language with a bytecode VM I wrote myself in a single C file is faster than Ruby and Python. It would be hard to design a language that isn't easy to make faster than Ruby and Python.
They both pay an incredible runtime performance cost for their deep support for dynamic runtime metaprogramming and mutability. If you don't want those features, I don't think it makes sense to use those languages.
I would argue the eventual "bogged down" feel of a Rails application isn't Ruby itself. It's rather the abuse the database takes, because ActiveRecord makes it so easy to lazy-load records by abstracting away the SQL: you end up hitting the database many times per request without realizing it until you get bitten. There is an option to disable lazy loading at the project level, but I've run into gems that assume it's enabled.
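The shape of that problem (the classic N+1 query pattern) can be sketched in plain Ruby. Nothing below is ActiveRecord; the `Repo` class is a made-up stand-in where each method call represents one SQL query, and the eager version approximates what `includes`/`preload` do:

```ruby
class Repo
  attr_reader :query_count

  def initialize
    @users = { 1 => "alice", 2 => "bob", 3 => "carol" }
    @posts = { 1 => ["a1", "a2"], 2 => ["b1"], 3 => [] }
    @query_count = 0
  end

  # SELECT id FROM users
  def user_ids
    @query_count += 1
    @users.keys
  end

  # SELECT * FROM posts WHERE user_id = ?  (one query per user: the "+1" part)
  def posts_for(user_id)
    @query_count += 1
    @posts[user_id]
  end

  # SELECT * FROM posts WHERE user_id IN (...)  (one batched query)
  def posts_for_all(user_ids)
    @query_count += 1
    user_ids.flat_map { |id| @posts[id] }
  end
end

# Lazy style: 1 query for the users, then 1 per user => N+1 = 4 queries here.
lazy = Repo.new
lazy.user_ids.each { |id| lazy.posts_for(id) }
puts lazy.query_count   # => 4

# Eager style: 2 queries total, regardless of how many users there are.
eager = Repo.new
eager.posts_for_all(eager.user_ids)
puts eager.query_count  # => 2
```

In actual Rails the lazy version looks like `User.all.each { |u| u.posts }` and the eager one is `User.includes(:posts)`; the toy just makes the query counts visible.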
For comparison, Elixir's Ecto doesn't allow lazy loading.
In addition, for things that can't be done easily with ActiveRecord, you can write your own SQL, but then you really need to know SQL to get the performance right.
Where I work, our main product is a Rails monolith, over 10 years old at this point. The few serious performance issues we currently have are due to complicated SQL queries written back when the DB was much smaller.
I guess it's partly Hyrum's law, in that big projects depend on implementation details to a degree. Related: many rely on C extensions (Graal can run these as well, but perhaps not all of them). And last I read about it, the JIT compiler is not yet that good at optimizing very big applications; it takes a really long time for some functions to become hot enough.
I think it's just that the very largest codebases are the most likely to end up depending on obscure bugs in Ruby or the surrounding ecosystem itself, or maybe features that basically nothing ever uses except that one file buried deep in that one gem that three real programs use. Ruby apparently has a lot of stuff like that.
> But I imagine Rails is being increasingly dismissed by newbie developers as being obsolete and not-cool. Mature companies may realize it's in their best interest to communicate that their boring-tech works just fine and that they have no plan on changing to cool-tech. If it's true, then someone's got to say it, right?
That's precisely the problem. If engineers were a dime a dozen, then Rails could be a great choice. Boring tech is a great place for businesses to be, in theory. Back here in reality, finding engineers is pretty hard, and the ones who we can find, are all only interested in working with Cool New Shiny Toys. If we posted a Ruby ad, we wouldn't get any bites. It's not used here.
Boring tech is great, until it becomes a noose to hang yourself with.
We've not had a big problem getting candidate streams of people who want to work with Ruby. In a way I don't mind having an implicit filter against people who only want to work with shiny new things, as we don't really want our platform filled with everyone's favorite shiny technologies any more than we need a shiny language to write a good-sized e-commerce web app.
> That's precisely the problem. If engineers were a dime a dozen, then Rails could be a great choice.
If engineers are hard to find then isn't using a framework that maximises developer productivity the way to go? Rails definitely falls into that category.
Related, we recently advertised for Rails and React engineers. We had about the same number of applicants for each position but the Rails engineer quality was in general higher than the React engineer quality.
> If engineers are hard to find then isn't using a framework that maximises developer productivity the way to go? Rails definitely falls into that category.
Languages are not like cars. You cannot take a professional Rust engineer and drop him into a Ruby environment and expect him to be immediately highly productive, as if you were switching him from a Miata to a Ferrari. Languages need to be learned all over again, their ecosystem needs to be learned, beginner code needs to be reviewed by someone senior who is more familiar with the language. You can't just make an executive decision to go with Ruby because it has a reputation for productivity and expect immediate productivity gains.
> If engineers were a dime a dozen, then Rails could be a great choice.
Are they not, though? The narrative is that there's simultaneously too many engineers coming out of school and yet it's extraordinarily difficult to hire engineers. Which is it? Or are most engineers really that bad that they're unhirable?
> Back here in reality, finding engineers is pretty hard, and the ones who we can find, are all only interested in working with Cool New Shiny Toys.
I definitely believe you, but I wonder how much of this is self-fulfilling. Maybe a lot of engineers get into cool-tech because more jobs are demanding it, and their perception is they'll be quickly obsolete if they work with boring-tech. Most newbs seem to look towards startups first, which are going to be using cool-tech a lot of the time, but there seems to actually be a lot of interest on HN in working with boring-tech at either nontech companies or BigCo.
> The narrative is that there's simultaneously too many engineers coming out of school and yet it's extraordinarily difficult to hire engineers. Which is it?
It's both. You need to be a large company to hire junior talent. The local market is mostly startups without well-developed production guardrails and well-gardened, too long backlogs. The large companies cherry-pick for the relatively few junior positions that open and they aren't using Ruby internally. Thus the Ruby ecosystem hasn't developed here.
> but there seems to actually be a lot of interest on HN in working with boring-tech at either nontech companies or BigCo.
HN trends toward senior talent that has been around the block enough times to know that stacks change and people don't. Most of the labor market comes into interviews wanting to impress you with how many languages they know.
It's not hard to hire an engineer that can take a ruby on rails project and run with it. There's mountains of tutorials and books about it. You don't need the best of the best for this.
Some people have decided that that isn't good enough for their company though.
No language book can genuinely teach you the ecosystem for that language. The ecosystem is many times larger than could ever possibly fit into one book, but the ecosystem is what provides you with the libraries you actually reach for to build things day-to-day.
Expecting engineers to self-teach from a book is practically begging for NIH syndrome and a rewrite in two years.
I bet you wouldn't hire me, even though I am a very competent developer with a long resume. The problem is narrow-minded hiring. I have never had a problem hiring good people.
> Not every business has the money to waste converting their old apps to cool-tech like Elixir or a React frontend with microservices.
I always ask myself (and the team at large) "how much time/effort will it take to reach parity using the new tool-set?". The answer usually settles the debate immediately.
In my experience, scaling RoR is totally doable but very tricky, as both the language and its associated frameworks are very terse and opaque, and by default seem to do the thing that does _not_ scale well. Because of all this magic, you have to be somewhat of an expert to avoid all the footguns.
As a new shop, without a known depth of knowledge and talent, I'd probably avoid Rails if scaling is a concern, but if I'm Gitlab or Shopify or some place with world class Ruby engineers, I'm staying the course.
> Someone needs to be interested in working with Rails.
A lot of us are. I don't think Gitlab is in any shortage of resumes...I tried. Might be the market collapse or whatever reason, didn't even get past the initial HR interview. It's also possible my country of residence isn't high on their list...we are expensive.
> But I imagine Rails is being increasingly dismissed by newbie developers as being obsolete and not-cool. Mature companies may realize it's in their best interest to communicate that their boring-tech works just fine and that they have no plan on changing to cool-tech
I don't think that's the case. I think Ruby is dismissed because it's not a boring language, yet has none of the safety of the other "shinier" languages.
I've worked on some huge rails apps, and I don't agree at all that performance is poor at scale. You might need an extra instance or two to handle the same traffic load, but in every codebase I've seen, the problems of slowness are related to database access, slow/unoptimized calls to other services/APIs, etc.
I actually disagree that it's suspicious. I think there are often trends in programming that despite being touted as newer, better, and faster have failed to deliver on that. Ala the amazing amount of churn in JS frameworks in the earlier years of SPA's and such. An article saying y'know what "X is actually good enough" isn't any more suspicious than all the articles saying "Y is the new hot thing".
I am biased here because with respect to the products that I've helped create the speed of ruby vs js, python, etc. has never been a dealbreaker.
Even if porting an entire application wasn't a nightmare, why would you ever do it unless your organization was going to fail without it? Porting entire applications needs to be justified by some critical requirement that (in this case) Ruby/Rails isn't meeting.
Because Rails is a good technology for a lot of use cases.
If you want to develop a simple crud style app, with user login, mail sending and all that stuff it is highly productive.
Performance is okay and certainly good enough for the average web project.
Just get rid of the weird parts like Turbolinks (unnecessary, and it messes up a lot of JS), and don't write APIs with it (you can't even rename fields easily).
There are similar Frameworks in other languages and they all have pros and cons.
Imho when it comes to web frameworks there is way too much emphasis on the startup style company and the ones with huge traffic.
Companies doing grunt work style web development don't waste their time on flavor-of-the-month.js.
If you build a website for a niche company to sell odd machinery on, you have to get shit done and you will never need to scale.
That means reliable, batteries included, highly standardized and easy to use. Rails ticks those boxes really well.
> Imho when it comes to web frameworks there is way too much emphasis on the startup style company and the ones with huge traffic.
That's exactly what is making web development decreasingly enjoyable as the years go on.
~99.5% of the web doesn't need the hyperscalable cool-tech du jour one typically reads about on HN. As long as developers don't fall for footguns, there's a lot that can be accomplished even with a language runtime like MRI. If scale becomes an issue, those things can be solved through horizontal scaling, optimizing database queries, not doing stupid shit that's memory-hogging, and moving expensive algorithms into another language. Ruby is perfectly adequate for serving HTTP requests. Whenever I've worked on a Rails app that everyone was frustrated with, the problems were almost always a compound of a bunch of dumbass shit that various developers piled on without much thought (otherwise I'd have seen them discussed in GitHub or Pivotal stories).
In another comment, it said Gitlab moved off ActiveRecord.
The fundamental problem with RSpec + ActiveRecord is that for each test you create DB state (tons of inserts), run the test (more DB reads and writes), and then tear down the state. This is very expensive when you have 100,000s of tests, each taking 500ms.
Rails/rspec does not make it easy to stub db state with ActiveRecord.
Does this mean if you only want to run the tests for `users_controller` during development, you still have to load the fixtures for all other tests with each run?
In the normal case, when you start a test run it loads the fixtures into their own database.
Any test can make use of what is in that database, so you tend to put in a bunch of stuff that is useful for most tests.
If a specific test needs an unusual setup, then it can do whatever is needed to load data. Generate data, load extra fixtures, etc. You can do this on a per test basis, or a group of tests, etc.
An option is to reload everything between each test. Not recommended.
Normal usage is to rollback any changes a test did, so that each test has a clean slate. This tends to be plenty fast. Parallelism can usually be done.
If speed is still a concern, you can get fancy and load up multiple databases with fixtures and divide tests between them. Maybe have 2-8 sets of tests running at same time.
It's been a few years, but I've inherited test suites that take 14 hours and been able to get them down to a few minutes with above techniques.
Something like Guard can be set up to automatically run the tests relevant to what you're working on, so you don't have to run the full suite each time.
> Every tiny rails app that uses rspec ends up with +1hr test times due to tight coupling between activemodel and the db connections.
Wat? I mean test times aren't great but I've got several thousand specs for what I would call a medium-sized application that run in <1m using TurboTests or <3m using rspec alone. I can't imagine a "tiny" rails app that takes an hour to test. Maybe you're doing a lot of browser rendering or something? My app is API only.
I think the fundamental design problem for rspec is for each test you create db state (tons of writes), run the test (reads and writes), and then tear-down (truncate). Each test is at least adding an authorized user (and then removing it) at the simplest and at the worst creating complicated db state to support relational models that must be added and torn down with each test (database_cleaner).
Gems like Fabricator or FactoryBot also make the "create db state" even more excessive, because developers would be lazy and use factory methods that create more than they actually need for the test.
For example, one project I stored a tree-structure that could be modified by api calls. Each test required re-creating the 15 node tree + each node's relational data (think node = user and all users must have a profile, authorization scheme, etc.).
I never figured out how to avoid resetting the db state for each test case.
This isn't an rspec problem, it's a Factories-for-ActiveRecord problem.
The two main options to mitigate are:
- just work with in-memory objects (FactoryBot using build or build_stubbed, or just MyModel.new) and stub/mock finders as needed,
- or there's this thing people like to hate that was introduced in Rails 1 called test fixtures, which allow you to prepopulate the database with some initial data and then your test just runs in a transaction and rolls back.
The best suggestion I've heard recently (though I can't recall where to give credit) for managing complexity w/ fixtures is to write an extremely minimal fixture set aiming for 2 instances per model with minimal data required to be valid, and then customize it as part of your test. For a blog example, that means you have two Posts in your fixtures and would write a test around drafts by updating one of them to draft status (be it state, toggle, or publication date) and then test your query, or controller, or whatever. This keeps the data churn inside each test minimal, while keeping the out-of-scope data minimal as well -- if there's only 2 blog posts, all your tests can assume there will be one you care about and one you don't, which is useful for testing filtering/search/visibility/authorization/etc while remaining pretty consistent.
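That minimal-fixture idea might look like this for the blog example (hypothetical file, record, and attribute names; a real app's validations would dictate what's actually required):

```yaml
# test/fixtures/posts.yml -- two records per model, with only the
# attributes needed for a Post to be valid.
published_post:
  title: First post
  status: published

draft_post:
  title: Second post
  status: draft
```

In a test you'd then reach for one of these and tweak it, e.g. `posts(:published_post).update!(status: :draft)`, rather than inserting fresh rows.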
> - just work with in-memory objects (FactoryBot using build or build_stubbed, or just MyModel.new) and stub/mock finders as needed,
Mocking the finders is obnoxious, since ActiveRecord returns a relation class, not an array. Plus, stubbing out relational coupling isn't easy (e.g. the user model might have a `company_name` method that delegates to the company table).
> - or there's this thing people like to hate that was introduced in Rails 1 called test fixtures, which allow you to prepopulate the database with some initial data and then your test just runs in a transaction and rolls back.
Fixtures and factories have the same issue: both are vulnerable to inserting data that's unnecessary for the test. Models may have validation requirements that must be satisfied for them to be stored but are completely irrelevant to the test. For example, if all users need a `company_id` with a foreign key constraint, then to test _anything_ on user you have to insert a valid company as well.
Maybe I need to re-evaluate fixtures though, since they would be simpler to run than a factory.
The idea is that you write one set of fixtures for the whole app that you then use in all tests. You'd have a valid company and a valid user in your fixtures, but that's fine because you probably want to test both the company and user models. In the user model tests, you can ignore the existence of the company fixtures, and vice versa.
Since the inserts only happen once for the whole test suite, the marginal cost of adding more fixtures is minimal, so it makes sense to just make the fixtures as complete as possible.
interesting. I haven't used fixtures before. Wouldn't this make individual tests slow (like in development?) since all fixtures must be inserted all the time?
It doesn't add as much overhead as you might expect because the fixture data is inserted into the database without instantiating any ActiveRecord models. Unless you're loading a truly crazy amount of fixtures (gigabytes?), the database can ingest all the data in single-digit milliseconds.
In addition to that, the test suite performance was regularly monitored. And like any performance issue, we'd instrument and then fix issues as they came up.
(duplicating another comment I wrote to hear your thoughts)
The fundamental design problem for rspec is for each test you create db state (tons of writes), run the test (reads and writes), and then tear-down (truncate). Each test is at least adding an authorized user (and then removing it) at the simplest and at the worst creating complicated db state to support relational models that must be added and torn down with each test (database_cleaner).
Gems like Fabricator or FactoryBot also make the "create db state" even more excessive, because developers would be lazy and use factory methods that create more than they actually need for the test.
I don't exactly agree that it's a design problem. I think there is a lot of value in testing at a high level (controller tests) and having them touch the full stack (down to the DB). Gives a lot of confidence that the code will work in production. Throwing money at it (running the tests in parallel) pretty much solves it.
I have seen FactoryBot usage get out of control (creating 8x as many records as you'd expect). That's a really easy way to slow down a test suite :). One way I've found to fix that is by adding tests for the factories themselves, asserting they only create what you want them to.
On model tests, another thing GitHub did well was encourage using `.build` when possible to avoid writing to the DB.
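A rough illustration of why `.build` is cheaper, using a toy in-memory model (the `FakeDB`/`User` classes are hypothetical stand-ins, not Rails API):

```ruby
# Tracks how many "INSERT"s happen, standing in for a real database.
class FakeDB
  class << self
    attr_accessor :writes
  end
  self.writes = 0
end

class User
  attr_reader :email

  def initialize(email)
    @email = email
  end

  # .build: object lives only in memory, no write issued.
  def self.build(email)
    new(email)
  end

  # .create: same object, plus one write against the database.
  def self.create(email)
    FakeDB.writes += 1
    new(email)
  end
end

User.build("a@example.com")   # 0 writes so far
User.create("b@example.com")  # 1 write
raise "expected exactly one write" unless FakeDB.writes == 1
```

In a real suite, a model test that only exercises validations or methods usually doesn't need the row persisted, so the built object is enough.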
> I think there is a lot of value in testing at a high level (controller tests) and having them touch the full stack (down to the DB).
There is a lot of wasted work. Each (similar) test would insert and then delete the same rows, but make a little tweak.
Other languages and frameworks (Java, Python, or Go) don't need to test the DB and run super quick. Most DB systems do not actually need to be tested. E2E testing can be done manually via curl or QA.
> Throwing money at it (running the tests in parallel) pretty much solves it.
Tests still are slow locally.
> One way I've found to fix that is by adding tests for the factories
That is interesting. I had a meta-test that would limit the number of sql queries a test could trigger.
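The idea can be sketched with a plain counter (the class and helper names here are made up; in a real Rails suite you'd hook the counting into ActiveRecord's SQL instrumentation rather than call it by hand):

```ruby
# Counts queries a test issues; a stand-in for real instrumentation.
class QueryCounter
  attr_reader :count

  def initialize
    @count = 0
  end

  def record_query(_sql)
    @count += 1
  end
end

# The meta-test: fail any test that exceeds its query budget.
def assert_query_budget(counter, limit)
  return if counter.count <= limit
  raise "test issued #{counter.count} queries, budget is #{limit}"
end

counter = QueryCounter.new
3.times { |i| counter.record_query("SELECT * FROM users WHERE id = #{i}") }
assert_query_budget(counter, 5) # within budget, passes silently
```

The budget number forces a conversation whenever a change makes a test chattier, which catches N+1 queries and factory bloat at review time.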
> Anyone find it suspicious how many companies put so much effort into justifying how Rails is still a good tech for a mature company?
No. I would imagine these articles are coming out as a response to people claiming Ruby is outdated tech when it's still perfectly viable. And I say this as someone who has happily moved on from it.
And I would hardly consider a blog post or two "a ton of effort".
Elixir. And it wasn't "because Elixir looks like Ruby"--in fact, I personally found having it look so much like Ruby detrimental at first since the similarities end at syntax. I had a growing interest in functional programming and in improving a multi-user web app we were building. Elixir's (really Erlang's) concurrency model is the first I've ever been comfortable with and able to build a clear mental picture of what's happening. The syntax (for creating concurrent processes) is a wee bit gnarly, of course. There's no perfect world :)
As it stands, if I were to ever want to work in OO again, I would go back to Ruby. It's still my local-scripting language of choice for things that are too annoying to do in bash.
OK. Elixir is a no go for me, there's zero jobs where I live and I don't see this ever changing.
That leaves Python; it was the second most enjoyable (to me) after Ruby, and Django is fine. Node is simply too hectic, with new frameworks coming in every week, and Go... oh boy.
Hey, I understand how I sound. I'm privileged to be doing this job and getting paid what I'm getting paid, so it's a bit ridiculous to be stressing over stacks. Some people's jobs are to fight for people's lives in emergency rooms or keep public order in dangerous neighborhoods, and here I am, not pleased that I might have to switch from Ruby to Go lol.
> Where do you live?
Sorry for the paranoia, but ever since I've found out you cannot delete accounts from Hacker News I tend not to reveal too much personal info. But let's just say that within quite a large radius there aren't any Elixir jobs available to me, and Ruby is drying up fast as well. And I don't think 100% remote is something I'd be happy with long term.
I complain about work all the time, I didn't think you sounded like anything :D
> Sorry for the paranoia
No worries--I thought of that after hitting submit. And ya, there are some jobs where I live but I work 100% remote in order to be at a company that uses Elixir and pays me far more than my own country would. Although I'm super happy with 100% remote. I've already blown off one of 2 company meet-ups this year, lol (though really that was more about having to travel).
> so we're just going to write an article about feeling good about where we're at.
I've been doing Rails for years now, and I have yet to work for a Rails shop that is living in denial about the state of their app. The batteries-included nature of Rails is easily worth the cost of no type validation in most companies I've worked for, and the teams I've worked with that lean into JS more tend to be much, much bigger, with lower developer throughput.
IMO it's a much more sane world to be in than JS and React where best practices change every year or two but the company repos can't keep up. So you wind up digging for solutions to problems that React no longer considers to be a best practice and is therefore buried underneath a pile of Medium articles discussing the new right way to do things.
This isn't about truth-telling, it's about impressing enterprise prospects with the _relative_ and self-described simplicity of a basket they are considering putting some/most/all of their apples in.
Given Gitlab's fairly transparent chronicling of what they've learned and how they work, I disagree. One of the things I like best about Gitlab (as an organization) is their ongoing effort to conduct business and process learnings in public, and share them.
I don't get the case, implied in your answer, that Rails is not working out well for them. What is the "better and faster" tech here that would have made a big impact on their business?
I mean, you have to justify being stuck somehow. GitLab is certainly not an average Rails project technical-debt-wise (I'm a tiny-time contributor to GitLab and my experience was quite amazing), but it's still Rails & Ruby all the way down there, which can get you only so far. You still have to write shitloads of tests, most of which are doing the job of a sum-types-capable language's compiler over and over again. One day you realize this, and that's the day Ruby is dead for you, because there's no chance in hell you can persuade your peers to use even measly dry-rb (dry-rb maintainers — no offense, I admire your stellar work), not to mention moving to F#/OCaml/Haskell/Rust.
At least in Gitlab's case, I think this is spot on.
My biggest complaint about it is that it's slow, and as of the last time I used it, so much of its functionality didn't auto-update until you hit refresh.
I don't know enough about web development to understand whether this was a Ruby on Rails problem or a Gitlab problem.
Rails has a lot of "magic". Which is fine if you only hire veteran Rails developers who know the magic, but over time it becomes organizationally exhausting: ultimately you will have to train people, you will have people leave the org, and you'll have people coming from other orgs with way less conceptual overhead who keep asking "why don't we just...", pointing to methodologies outside the core Rails ecosystem.
Rails is perfectly fine but being perfectly fine is an unsatisfactory outcome over time.
Also problematic is that gitlab really really cares what programming languages you've worked in before. I've applied and they've turned me down because I didn't have enough Ruby experience, nevermind that I have plenty of years of experience in development and learning another language isn't that big of a deal.
Ruby has had its heyday; its 15 minutes of fame was a decade ago. If they persist in using Ruby and only hiring people with extensive Ruby experience, they're going to run out of talent AND have to pay over market rate to get people in their specialization. Like all the banks and governments forking over loads of cash for COBOL and Coldfusion consultants because they refused to modernize.
> If they persist in using Ruby and only hiring people with extensive Ruby experience, they're going to run out of talent
The best time to become a Ruby programmer or run a bootcamp teaching Ruby was five years ago. The second best time is now.
> Like all the banks and governments forking over loads of cash for COBOL and Coldfusion
Ha.
> learning another language isn't that big of a deal
Actually Ruby has a very specific and long-standing style that is subtle and takes a while to pick up. Any Rubyist worth their salt can instantly spot code that's simply been "ported" over from some other language community. Ruby has a "Ruby way", which in large part inspired the "Rails way". It's frankly insulting to claim you can just pick up Ruby on a whim and it's no big deal. Sure, Ruby isn't hard to learn at first, and I encourage all beginners out there to give it a real try. You won't be disappointed. But Ruby is also a deep language with a rich vocabulary, and it will take you some time to master it. In some ways, if you're an experienced programmer from another language community, it'll take you longer—because you'll need to unlearn the habits you've picked up before that in Ruby might be considered a code smell.
I suggest you spend some time gaining actual Ruby experience, and then next time you apply for a job asking for Ruby experience, you'll have some. :)
Ruby has been steadily going down as a favorite language for years. I have absolutely no beef with Ruby, but GP is pretty clearly pointing out the disconnect between continuing to use a language with a dwindling userbase and requiring that userbase to have professional experience. In its home country, GitLab is one of the few to use Ruby to begin with.
>I suggest you spend some time gaining actual Ruby experience
Which requires those people to get jobs. If they can't get jobs in the language, they can't get experience in the language. This is already an issue with the more popular languages which are converging to the same point.
>if you're an experienced programmer from another language community, it'll take you longer—because you'll need to unlearn the habits you've picked up before that in Ruby might be considered a code smell.
So much this :)
I've been a C# developer for over 20 years and I also write quite a bit of Go. I recently decided to write my NFT app in RoR with backend / batch processes in pure Ruby, and having essentially no prior experience with Ruby, it has taken a while to unlearn some of the things that are nearly instinctive to me with (particularly) C#.
That said, so far I am really glad I did what I did. I love Ruby and I think it's incredibly underrated, especially vs Python (though Python is also a great language on its own). Once you really get a handle on Ruby (and Rails), it makes web dev fun again.
> Also problematic is that gitlab really really cares what programming languages you've worked in before. I've applied and they've turned me down because I didn't have enough Ruby experience, nevermind that I have plenty of years of experience in development and learning another language isn't that big of a deal.
Yeah, I've come across this in loads of companies too. In the end, I think those sort of companies don't understand that once you grok 2-3 languages, the rest of them mostly look/work the same (except the really different ones), and they are just looking for "code-monkeys" to program with them for a year or two.
> But if you know 2-3 closely related languages, one that isn't closely related to those 2-3 can be problematic.
Indeed. Not sure if you left it out on purpose, but my comment included (right after the part you quoted) the following: "(except the really different ones)", which covers exactly what you're talking about :)
I don't think most people simply "accidentally grok" or use Smalltalk, Lisp, or Raku; therefore I reckon most people don't have what it takes to grok Ruby either.
Oh please, Ruby is not some arcane language. It's really not all that different from Python, JavaScript, Java, and C# once you get used to your dev environment. Those languages alone cover way over half of all developers.
People here are acting like Ruby is anywhere near the same level of difficulty for a regular high-level OOP developer as C++ or Haskell.
> It's really not all that different from Python, JavaScript, Java and C#
I think that being able to dynamically replace behavior from almost anything at any point in your program is pretty much very different from Python, Java and C#.
The only other mainstream language with this capacity is Erlang, and Erlang is not exactly what I would call a mainstream language.
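To make the parent's point concrete: Ruby's open classes and singleton methods let you swap behavior at runtime with no library support. A minimal illustration in plain Ruby:

```ruby
class Greeter
  def greet
    "hello"
  end
end

g = Greeter.new
g.greet # => "hello"

# Reopen the class mid-program and swap the method out:
class Greeter
  def greet
    "bonjour"
  end
end
g.greet # => "bonjour" -- even instances created before the patch pick it up

# Or replace behavior on one object only (a singleton method):
def g.greet
  "just this object"
end
g.greet            # => "just this object"
Greeter.new.greet  # => "bonjour" -- other instances are unaffected
```

Rails leans on exactly this kind of runtime redefinition for much of its "magic", which is part of why the language feels more different in practice than its syntax suggests.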
And you believe that Ruby both does not make it easy to learn this concept and makes aggressive use of this concept?
I believe you're severely overestimating the impact and importance of one quirk, when many developers are picking up wildly differing languages and are decently successful at it all the time.
> I think that being able to dynamically replace behavior from almost anything at any point in your program is pretty much very different from Python, Java and C#.
Sure, but it's a very popular language that puts the ability to modify behavior at run-time front and center. It's not like Ruby is special in that regard.
To be fair it takes a ton of memorization to be productive in a Rails codebase, between lack of static types and tons of runtime "magic". Requiring Ruby experience is silly if you've managed to write in several languages productively already—Rails experience, though, I could see requiring that. Recent Rails experience, even.
Which is part of why I've pretty much sworn it off forever, after working on several Rails projects over the years. I never, ever want to onboard to a Rails project again, if I can help it.
It could also be that they didn't want to hire you because you aren't a 'believer' in the technology (in that you're not already experienced with it).
They have doubled down on Rails, so if they hire people who aren't already in the ballgame, they will either be faced with people who want to change things (which creates conflicts) or have to train people to avoid the speed bumps in the existing system (which also pushes people to want to change things).
I'm curious, was your position during the interview that you were beyond excited to pick up Rails as your next technology?
Finally, you never know... their feedback might have also just been a kinder let down... maybe they didn't feel your programming chops were up to their required level. Who knows...
> Like all the banks and governments forking over loads of cash for COBOL and Coldfusion consultants because they refused to modernize.
COBOL itself is actually very cheap and simple; in fact, "so simple anyone can use it, because it's just English" /s
What banks and governments fork loads of cash for is a highly vertically scalable, performant, and redundant system that works (and keeps working) under heavy load and concurrency. A SaaS cloud (including the ludicrous prices you pay for a SaaS when you need to scale your service...) before SaaS clouds were a thing. A mainframe computer with a support contract.
>"Also problematic is that gitlab really really cares what programming languages you've worked in before"
I agree that for an experienced developer, the requirement to have extensive experience with a particular language may not be all that important.
But rewriting perfectly working software just because developers who know the legacy language are too expensive is a big mistake. They would waste way, way more trying to implement a perfect copy of the old system with "cheaper" developers.
Nobody needs to justify not replatforming, regardless of the involved tech. It's just a sane default. It's when an org _does_ decide to rebuild their existing property on a different stack that justification is in order.
Oh, yeah. I'm not prescriptive about never replatforming. You just shouldn't do it without a very compelling reason and a lot of consideration.
If you've got a dozen microservices written in a mix of ColdFusion, Salesforce, something Microsofty, and some other enterprise slow-and-expensive platform running on leased bare metal with a bunch of third-party licensed support software and you're burning tens of millions a year on that, and you can get a team to rewrite the thing on another stack--virtually _any_ other stack, so long as you go from a dozen vendors to fewer--you can probably justify allocating your entire engineering budget for a year or two just to stop the bleeding.
Twitter built their stack on Rails, ran into performance issues that they couldn't mitigate, and replatformed. That seems to have worked out for them.
Neither of those are most of us.
Sustainable computing, as you're alluding to, is a whole other discussion. It's one worth having, but I hope there's other solutions out there than "write software the 10x slower and harder way", because otherwise anyone who tries it will be beaten by a less principled competitor.
Short of implementing a Logan's Run-style system of government, the existence of individuals tends to get determined a couple decades before they'd be available for hiring as developers.
To each his own, but the only reason to go with Ruby in this case is if you already have a large legacy codebase and internal Ruby talent. For greenfield, stay away from both PHP and Ruby.
I've been using Rails for 15 years, and probably written or supported 20 apps with it. In all the reading I've done on it, and all the work I've done with it, I've never used JRuby, nor seen it implied that this was the "correct" way to run an app in production. As an appeal to authority, Heroku's default is MRI.
JRuby was definitely the play in 2008-2012, the reason being that Puma wasn't out yet and everything else was dog-slow, so you used to use Tomcat or GlassFish.
Since JRuby compiles down to Java bytecode, the resulting performance (throughput and latency) was a lot better, and the lack of jemalloc (a good allocator by most measures!) meant that your memory growth was a little more sawtooth and a little less... growy.
I think these days most folks are just running something like puma.
Aside from the idea of charting random developers' gut instincts regarding entire programming languages, the only contribution this article makes to the literature is the dubious phrase "scalability of innovation." I would have appreciated some honesty regarding why they're really sticking with Rails at Gitlab: a decade's worth of development that would have to be rewritten from scratch to little discernible benefit.
> I would have appreciated some honesty regarding why they're really sticking with Rails at Gitlab: a decade's worth of development that would have to be rewritten from scratch to little discernible benefit.
This is an underappreciated value of sticking with a current solution. You know the warts. You know the issues. You have significant sunk costs.
The value of moving to a new solution has to be really really high, because there are always surprises when doing so (business logic that you didn't account for, edge cases that only occur once in a blue moon).
In my experience, microservices are incompatible with the universe. All things are connected, sometimes in unforeseen ways.
Almost inevitably, some app or front-end is going to need a joined result from multiple services. If these services are "best practice" independent, they likely have their own data stores. You can now bolt some new architectural layer on top, thereby introducing a new point of failure. Or you can let the front-end do sequential calls and do the join.
This is how you go from a traditional query in a single data store with a response time of 50ms to the modern equivalent taking multiple seconds. Not just due to the join, also due to the network, auth calls, and other overhead.
But hey, at least the microservice teams can operate and deploy independently. Which is a problem instead of a feature. Services require constant (unforeseen) changes driven by upstream demand, and those upstream consumers now have to negotiate with/wait for this "independent" team to be able to deliver.
Everything becomes slower and more difficult: planning, implementing, testing, debugging.
I'm not an absolutist; I believe there are use cases for microservices, but my main point is that reality is not as modular as you have arbitrarily modeled it.
I don't use Rails anymore, but I did use Rails from pre-1.0 through version 3 (and played with later versions a tad).
The best thing about Rails is that it delivers a complete package. Rails doesn't come along and say "Oh, you need to use JS packages and minify your CSS and compile your SASS/SCSS? That's an exercise left to the reader. Good luck dealing with npm or yarn, choosing between Bower/Gulp/Grunt/Webpack/Browserify, good luck figuring out how to integrate Node into your build process, good luck figuring out how to add caching, good luck with async background task runners, good luck sending email, good luck with validation, good luck with testing, good luck with all the things beyond the core competency."
That's one of the things that made Rails so powerful and continues to make it great. They offered good practices for so much of what you need. I don't want to say best practices because I think sometimes Rails has faltered, but at least there was a reasonable path and because it was "in" the framework, people in the community would talk about it - and talk about how it could be better.
I think Rails also pushed the industry toward more structured organization of code (rather than just creating a mess), toward better testing, MVC and other patterns, etc.
I hate picking on projects that I think are mostly good, but to illustrate some things it helps. Ninja Framework (Java) has a section "Advanced Topics" which includes things like "working with relational databases" and "validation" and "testing". Frankly, those aren't add-ons or advanced topics.
In fact, Rails really pushed testing. It's been a while (so correct me if I'm wrong), but all their generators built testing stubs. Sure, you'd have to fill out the tests, but it gave you a place for the tests, a way of running them, and generating the file gave you that little push of "oh, this isn't hard" (which is especially important for newer programmers).
I think Django and ASP.NET are wonderful, but neither really offers a great answer for handling the JS/TypeScript/SCSS/SASS/minification stuff that you really want for a modern web app. ASP.NET points you to LigerShark's WebOptimizer which is good. There are add-ons for Django. But it means that you're left looking around the internet for what is the "right" path which can simply waste time and put people into analysis paralysis. Likewise, there are add-ons for background tasks. ASP.NET has the incredibly popular Hangfire. Django has add-ons as well. These aren't impossible things to overcome, but Rails holds your hand a bit.
Many frameworks don't deal with things like "how do I deal with the fact that I want to test my stuff against a database" and set that up nicely for you. Instead, it's an exercise left up to the reader.
If you're at a giant company where you have a platform team who can handle all these things, you don't need Rails. You have your own teams making this stuff good for you. If you're a start-up with a few engineers, do you want to spend your time figuring out how to compile assets, how to wire up MySQL to your tests, how to run background async tasks, etc.?
That said, I do think Rails went in certain directions that have limited it. Static typing has become the way most people want to program. People are less interested in "magical" ways of typing less code and more interested in understanding how code is working. Other languages have become a lot better with lambdas, local type inference, less ceremony, etc. and we've seen new languages like Kotlin and Go. Other languages are substantially faster. Ultimately, a lot of frameworks learned a ton from Rails and copied a lot of the best bits. I still think Rails has a broader vision than basically anyone else and I think it's a bit of a pity that no one else seems to want to have that vision (and please point me to projects you think have that broad vision if you know of them).
Rails really changed web software. Even if you want to use something else, Rails has likely had a huge impact on how you're developing web software (even if you don't know it). I'm grateful for what Rails taught me and how it impacted the industry even though it's been a long time since I've used it.
This is a very insightful comment. I used the same versions of rails and (after reading your comment) realized I appreciate the same things about it. Thank you for articulating it so well.
I too found a lack of strong opinions and recommendations for using Django with TypeScript and modern web technologies. So, shameless plug, I built https://www.reactivated.io . Give it a look if you ever work with Django again.
> I think Django and ASP.NET are wonderful, but neither really offers a great answer for handling the JS/TypeScript/SCSS/SASS/minification stuff that you really want for a modern web app.
Why not? For building my project, I came across the relatively simple paradigm of using standard create-react-app / webpack boilerplate to bundle a React SPA into a .js file and deliver it from a Django template. So you can choose which pages are Django templates and which are individual SPAs.
This is the worst of all of the solutions in my experience. For example, say you have a "card" or a "dropdown" component, now you need to maintain two implementations of it, one for the django templates and another for the React parts. Then when you load a page it will start blank or with small blanks or spinners all around while the components are initialised. Code splitting becomes really complicated as it will have to be mostly done manually, or just loading everything on every page. Then if you need translations, you'll need two systems, the Django one and the one for your "SPA" parts. Some parts of your app will need a backend API, some others will post to controllers... then suddenly due to changing requirements something that was a Django template now needs to be a bit more reactive, so... you rewrite the entire thing in React? or you sprinkle some react here and keep some templating there?... it quickly becomes a mess.
I'd say either stay on the templating world, and use something like Unpoly/HTMX/Hotwire/Livewire, etc.... or go "full frontend framework" with Next.js, Inertia.js. Both options are great in my opinion. The middle ground is very muddy. I've been there.
As someone who has always viewed the winds of web development trends with significant skepticism, this article is one of the best I've read at explaining why no, you don't need <x language/y architecture/z framework>. The technologies that powered the web ten years ago still work just fine. Even the ones that powered the web 20 years ago often still have their place. Keeping your stack boring is frequently the best decision for everyone, save for the career climber who wants to put the latest trends on their CV for the next job.
I like to refer to computing history as much as the next person, but the references in this post [0][1][2] came across as rather weak and mostly anachronistic.
Computer hardware, networking, and server software has evolved by leaps and bounds over the last 5-10 years, let alone 50!
There's a middle ground between microservices and de-coupling. The UI can be written in JS/TS with a backend written in a more scalable faster language. It doesn't need to be all or nothing? Even if you stick to Ruby pulling some things apart due to separate concern/responsibilities is not a bad approach. Aka if you have workers/cleanup operations they don't need to live in the same code as your REST/MVC code base.
Also, it would be nice if their CI/CD could be defined in multiple .yml instead of one giant file that I seem to end up in most projects.
> Also, it would be nice if their CI/CD could be defined in multiple .yml instead of one giant file that I seem to end up in most projects.
It can. You can use `include` to include templates and job definitions from other files. Also, with child pipelines (i.e. trigger jobs), you can run a pipeline defined in a separate YAML file.
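For example, a top-level `.gitlab-ci.yml` can pull in other files and spawn a child pipeline (file paths here are hypothetical; `include:local` and `trigger:include` are the documented GitLab CI keywords):

```yaml
# .gitlab-ci.yml -- split the pipeline definition across files
include:
  - local: ci/build.gitlab-ci.yml
  - local: ci/test.gitlab-ci.yml

# A child pipeline defined entirely in its own YAML file:
deploy:
  trigger:
    include: ci/deploy.gitlab-ci.yml
```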
As the saying goes in my country, don’t practice javelin in a green house.
The coupling between Git and the surrounding services and software architecture is just a big joke. Microservices are a terrible match for Git. Get rid of Git instead.
They don't really bother to say what their "modular monolith" looks like, besides being "well-structured, well-architected", and tautologically, "highly modular."
We don’t have an app/ directory in our Rails project. All of our code is either in gems/ or engines/.
Gems contain Ruby code that does not depend on Rails. We use ActiveSupport, but we do not use ActiveRecord or ActionPack. The gems are all stateless.
Engines contain Ruby code that does depend on Rails. Persistence happens at this layer through ActiveRecord. API and Web interfaces are exposed at this layer through ActionPack.
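Concretely, a layout like that can be wired up in the Gemfile with path-sourced gems (the gem names below are hypothetical stand-ins, not our actual Gemfile):

```ruby
# Gemfile (sketch) -- local gems and engines pulled in by path
gem "billing",        path: "gems/billing"          # pure Ruby, stateless, no Rails deps
gem "billing_engine", path: "engines/billing_engine" # Rails engine: ActiveRecord + ActionPack live here
```

The dependency direction is enforced by each gem's own gemspec: gems/ can't accidentally reach into engines/, while engines/ may depend on gems/.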
Is it a good trade? To me it seems like maintaining a gem/library is less complex than maintaining a service that exposes the functionality of said gem. No networking, deployment configuration, request handling, parsing, host monitoring, logging, access control, etc to deal with.
Right, but then they say "Although structuring GitLab as a monolith has been extremely beneficial for us, we are not dogmatic about that structure." And looking at their source code, they aren't following that very strictly.
Yeah, it has fallen out of fashion, but I still say Java is pretty much always the best option. Java may be less "approachable" than Rails, but 99% of an application's lifetime will be spent in maintenance. And anyone coming in cold to any software written by any human being is going to have an uphill battle. Strict types, rigid structures, and verbose syntax are a blessing to anyone coming in from the outside to read your code. The two language features that make Java the best are javadoc and clear stack traces. Anything else is just superfluous.
Also, anyone who thinks Maven is too complicated while working in a language that requires bash hacks to manage runtime versions and native dependencies is equally nuts. Maven (the JDK, really) lets you specify language-level compatibility as a configuration flag, and can hold multiple versions of external dependencies at once without fainting.
Tomcat is an archaic goddamn mess, with so many pitfalls and difficulties in getting it actually performant on a production system that it isn't even funny to make memes out of it.
Not to mention the crap tooling choices (Ant vs. Gradle vs. Maven), the tendency of Java applications to bloat in RAM consumption over time due to memory leaks, or the fact that Java, unlike PHP, doesn't punish a developer for wasteful development. With Java apps, startup times of minutes are the norm; the worst thing I ever saw written in it was an "enterprise" CMS (!) that, while definitely more capable than Drupal or WordPress, takes a 32GB RAM machine to develop on, wants 64GB or more for hosting, and takes half a goddamn hour to boot.
It's so hard to get it configured right that instead of deploying your app to Tomcat, you embed Tomcat in your app and let the framework deal with the configuration, as in Spring Boot.
Of course it means you end up with several dozen/hundred/thousand copies of Tomcat...
That's even worse for optimization since now you have to deal with whatever your framework does to get Tomcat up to speed and how to set a configuration option for Tomcat in a way that eventually ends up at the correct place.
And now I have to deal with this supremely crappy Spring Boot, which converts compile-time Java errors into runtime exceptions and turns an already verbose 100-line Java stack trace into 5000 lines of Spring-infested stack trace. I guess I just can't praise Spring Boot enough.
Of the four things I remember ending up as crappy runtime errors that totally should have been compile-time Java errors, none are specifically related to Spring Boot; they're all generic "Java things":
* people using Win boxes for development and therefore antivirus locking files in innoportune times during compilation/packing;
* people using the Eclipse Java Compiler, an IDE compiler designed to be useful with bad Java;
* people trusting themselves writing XML annotations instead of Java annotations, and not unit testing the code (this lack of unit tests alone probably means the code literally did not even work on their machine);
* people using the existential, compile-on-runtime terror that is Java Server Pages.
I smile not only because you are right but also because Tomcat looks like a sweet little server compared to the next level of crappiness peddled by IBM/Red Hat/Oracle etc., which I have the misfortune to deal with.
I have just watched an ops team say "hey, there's a security problem with our version of Tomcat, we need to update it," and then nothing works anymore for a few hours until they chase down every problem.
My observed odds of a point version upgrade on a normal web server breaking something is 0%. That includes Microsoft IIS, that is a patently known piece of shit.
My observed odds of a point version upgrade on Tomcat breaking something is 100%.
If you care about hiring and recruiting, then you would kill your Ruby on Rails projects.
In fact, as far as recruiting goes, you really need to switch all your projects to TypeScript, C#, or Python. Everything else (Java, PHP, etc.) is on the decline.
POSSIBLY on the way up is Go, but again, don't use it if hiring is a primary concern.
BUT Ruby on Rails has to be one of the worst to recruit for, short of Fortran or COBOL.
> You need a fairly sophisticated DevOps organization to successfully run microservices
Yawn. Stopped reading when I reached this. One can make the case for their choices without parroting the same old tired arguments against alternative choices. Shallow comparison of Java to PHP to Ruby is lazy, and very 2010. If I were in the CEO's circle, I wouldn't have recommended this be published. Bad look for GitLab.