
> One of my biggest learnings from doing a bunch of web MVC through Rails over the years is that the framework should heavily discourage business logic in the model layer.

I am curious where this comes from, because my thinking is the absolute opposite. As much business logic as possible should belong in the model. Services should mostly be reserved for specific, more complex pieces of code that are triggered from the model. Skinny controller, Fat Model is the style of code organization that I find makes code the easiest to debug, organize, and discover. Heavy service use ends up producing a lot of spaghetti code in my experience.

The other part is that from a pure OOP pov, the model is the base object of what defines the entity. Your "User" should know everything about itself, and should communicate with other entities via messages.

> Don't allow "callbacks" (what AR calls them) ie hooks like afterCreate in the data model. I know you don't have these yet in your ORM, but in case those are on the roadmap, my opinion is that they should not be.

This I agree with. Callbacks cause a lot of weird side effects that make code really hard to debug.




    > I am curious where this comes from, because my thinking is the absolutely opposite. As much business logic as possible should belong in the model.
The opposite of this is what Fowler has called an "Anemic Domain Model"[0], which is ostensibly an anti-pattern. What I've learned from my own experience is that with an anemic domain model, the biggest challenge is that the logic for mutating that object is scattered all over the codebase. So instead of `thing.DoDiscreteThang()`, there could be one or more of `service1.DoDiscreteThang(thing)` and `serviceN.DoDiscreteThang(thing)`, because the author of `service1` didn't know that `serviceN` also did the mutation.
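A quick sketch of the contrast (Python, with invented names echoing the example above):

    # Anemic domain model: the object is just data, and the mutation logic
    # lives in whichever service happens to need it.
    class Thing:
        def __init__(self, state):
            self.state = state

    class Service1:
        def do_discrete_thang(self, thing):
            thing.state = "done"    # service1's idea of the mutation

    class ServiceN:
        def do_discrete_thang(self, thing):
            thing.state = "DONE"    # serviceN's slightly different idea

    # Rich domain model: the object owns its invariants, so there is
    # exactly one place where the mutation is defined.
    class RichThing:
        def __init__(self, state):
            self._state = state

        def do_discrete_thang(self):
            self._state = "done"    # the single, canonical mutation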

Domain models are hard to do well and I think the SOA era brought a lot of confusion between data transfer objects, serialized objects, anemic domain models, and domain models.

[0] https://martinfowler.com/bliki/AnemicDomainModel.html


>the biggest challenge is that the logic for mutating that object is all over the codebase

Just use immutable data structures and be done with it. In departing from old OOP views and becoming friendlier to functional programming and data-oriented programming, C# introduced records, which are immutable. Java and Python probably have similar constructs. JavaScript has allowed the use of immutable data for a long time.

If you insist on using fat models, you will still mutate the data all over the place through method calls; you just obfuscate it.


> Probably Java and Python have similar constructs

In Python, the closest you can get is a "frozen" dataclass, but you don't get true immutability[0]. What you _do_ get is effective enough for just about all practical use cases, except for the standard foot guns around mutable data structures.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MyModel:
        ...
[0]: https://docs.python.org/3/library/dataclasses.html#frozen-in...
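One of those foot guns, for instance: freezing the dataclass doesn't freeze mutable values it holds:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class MyModel:
        tags: list = field(default_factory=list)

    m = MyModel()
    # m.tags = ["x"]     # raises dataclasses.FrozenInstanceError
    m.tags.append("x")   # ...but mutating the list in place still works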


You can redefine the byte representation `True` corresponds to in Python. "Immutable enough" is all you're really looking for; if somebody goes out of their way to mutate the thing, then they probably had a good reason for it.


Ahh, Fowler. The author who gave the world such gifts as Dependency Injection, Inversion of Control, and other over-engineered "patterns". This is just my opinion obviously, based on experience spanning from the early 90s.


That's like blaming Fleming for the antibiotic crisis. Just because you have a pattern doesn't mean you should use it preemptively.


Agreed, although the Java culture took the patterns and applied them in a cargo-cult frenzy. I do think the likes of Fowler and the so-called Gang of Four are to blame for many of Sun's later mistakes in API design and for the culture of patterns-everywhere in that era.


Imho, mutating the same object so many times that a developer can't easily infer which changes have already been applied is also a strong code smell. Fat models tend to encourage it, since all the mutation logic is available to all the services.


There are ways of getting around this. For instance, the "mutating" code can be organized in the service layer in a single location.

For instance, if you are updating a ShoppingCart model, all of the code which creates/updates/deletes a ShoppingCart could be kept in the ShoppingService - which would also create/update/delete the ShoppingCartItem models that are the line items for each item in the cart. So you don't have one service class per table - but rather one service class per module of functionality.
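A rough sketch of that grouping (hypothetical names; the `db` object here is just a stand-in for whatever data access layer you use):

    class ShoppingService:
        """One service per module of functionality: every create/update/delete
        of ShoppingCart and ShoppingCartItem goes through here."""

        def __init__(self, db):
            self.db = db  # stand-in for the data access layer

        def add_item(self, cart_id, product_id, qty):
            cart = self.db.get_cart(cart_id)
            cart.items.append({"product_id": product_id, "qty": qty})
            self.db.save(cart)

        def remove_item(self, cart_id, product_id):
            cart = self.db.get_cart(cart_id)
            cart.items = [i for i in cart.items if i["product_id"] != product_id]
            self.db.save(cart)

        def clear_cart(self, cart_id):
            cart = self.db.get_cart(cart_id)
            cart.items = []
            self.db.save(cart)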


The pattern is not OOP but that hardly makes it an anti-pattern.

Personally, my take is that business logic should be in the services, and object-specific validation and the like can be in the model. Unless your business logic is meant to deal entirely with a single object type at a time, you can hardly fit it into the pure OOP dogma. A behavior that deals with ModelA and ModelB seems just as at home on serviceAB as it does on either model, from an OOP perspective.


I tend to draw the line at intrinsic vs extrinsic behavior. The model layer must be able to maintain all intrinsic properties. Whenever it would talk outside the application, it's beyond the domain of the model.

Taken to the extreme, you could model all intrinsic constraints and triggers at the relational database level, and have a perfectly functional anemic domain model.


In our model we have "repositories" (they don't talk outside the application; they basically contain queries related to a specific db table), and "services" (they call models, do queries that are not related to a specific db table, and may talk outside the application).


> As much business logic as possible should belong in the model. Services should almost all be specific more complex pieces of code that are triggered from the model.

My experience with fat models is that they work for trivial cases. Once you get more complex business rules that span multiple models, the question becomes which model they should be implemented on. For example, in an e-commerce app you might need to evaluate the User, Product, DiscountCode and InventoryRow tables as part of a single logical business case to determine the final price. At that point it doesn't make much sense to implement it on a model, since it's not inherent to any of them, but a PriceCalculator service makes sense.
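Something like the following (purely illustrative; the model names come from the example above, the methods and rules are made up):

    class PriceCalculator:
        """Cross-model business rule: the final price depends on the user,
        the product, an optional discount code, and current inventory."""

        def final_price(self, user, product, discount_code=None, inventory_row=None):
            price = product.base_price
            if user.is_vip:
                price *= 0.95  # hypothetical loyalty rule
            if discount_code is not None and discount_code.is_valid_for(user):
                price -= price * discount_code.percent_off / 100
            if inventory_row is not None and inventory_row.quantity == 0:
                raise ValueError("out of stock")
            return round(price, 2)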


Exactly how we do services.

We have one model file per db table (a "repository") in which we define all queries that "logically belong to that table" (sure they do joins and/or sub-queries that involve other tables, but they still "logically belong to a specific table").

Once we need logic that combines queries from several "repositories", we put that in a "service" file that's named after what it does (e.g. PriceCalculator).

Most of our business logic is in the models (repositories and services); the rest is encapsulated in the SQL queries. Repositories never call services. Model code never calls controller code. Pretty happy with it.
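Roughly, a repository file in that setup might look something like this (a hedged sketch; the file, table and column names are invented, and it assumes a DB-API style connection such as sqlite3):

    # repositories/products.py: queries that "logically belong" to the products table
    class ProductRepository:
        def __init__(self, conn):
            self.conn = conn

        def find_by_id(self, product_id):
            return self.conn.execute(
                "SELECT * FROM products WHERE id = ?", (product_id,)
            ).fetchone()

        def find_in_stock(self):
            # joins another table, but still "logically belongs" to products
            return self.conn.execute(
                "SELECT p.* FROM products p"
                " JOIN inventory i ON i.product_id = p.id"
                " WHERE i.quantity > 0"
            ).fetchall()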


When you join two tables, which model does the query belong to?


We wouldn't call it a model; we have no notion of "a model", merely a package called "models" (in line with MVC separation).

We do have repositories. And when joining, the query could belong to both tables, and thus to both repositories. In those cases the dev picks one: the one that is most prominent to them.


This sounds to me like the standard OOP versus data-oriented programming divide. You want to think of code as a bunch of bundles of data and associated functionality; GP wants to think of code as data models and services or functions that act on them.


Business logic should sit in the domain model, but not the ORM model. The domain model should be an object that is not coupled to the web framework. In the Clean Architecture approach this is called an Entity.


This is the critical difference.

One of the simplest examples is that you could have a Login domain model that handles login-related business logic and mutates properties in the User ORM model.

All your login-related business logic code goes in the Login model, and any "the data _must_ look like this or be transformed like that" logic can go in the ORM model. If some other service wants to do anything related to the login process, it should be calling into the Login domain model, not accessing the User ORM model directly.
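A rough sketch of that split (names are invented, no particular framework assumed):

    class User:                      # ORM model: persistence plus data invariants only
        def __init__(self, email):
            self.email = email.lower()   # "the data must look like this" logic
            self.failed_logins = 0
            self.locked = False

    class Login:                     # domain model: login business rules
        MAX_ATTEMPTS = 5

        def __init__(self, user):
            self.user = user

        def attempt(self, password_ok):
            if self.user.locked:
                return False
            if password_ok:
                self.user.failed_logins = 0
                return True
            self.user.failed_logins += 1
            if self.user.failed_logins >= self.MAX_ATTEMPTS:
                self.user.locked = True
            return False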


> "the data _must_ look like this or be transformed like that" logic can go in the ORM model

I would rather implement the Repository pattern and leave the poor model as a plain data structure.


What's the difference between this domain model and the service then? In your example you'd have a Login service, and all the code related to login would have to go through the Login service, right? Why do you need the additional domain model layer?


I think the ORM (with Entities) is an anti-pattern. It makes simple queries slightly simpler and hard queries impossible to express; hence you will need another way to express the hard ones.

Also Entities are usually mutable.

What clean architecture prescribes here is VERY bad for performance. Some of your business logic will dictate how you write your queries if you care about performance.


>Business logic should sit in the domain model, but not the orm model.

Business logic should sit in a business layer.


We recently migrated from fat controllers to fat models. We found that fat models make the code a lot clearer and behaviour much easier to test.


I'd argue that you shouldn't use a fat model either. To me the best approach is having as little code as possible in the controller, no code at all in the model, service layers that take care of business logic, and a layer for talking to the database.


The code that talks to the db should contain a lot of business logic if you want performant queries. I'd say the "service layer" and the "layer for talking to the database" (repositories) are all part of the model and all contain business logic.


A model should as closely as possible represent what it is (a table in a DBMS), not what it wants to be (the thing that the table is representing).

Otherwise you have two models, the model in your web framework and the model in your DBMS.

I would take this a step further and suggest that the term "model" is unhelpful and should be eliminated and replaced with the term "table" which is much more grounding.


The "M" is just a package, a grouping in the structure of your code.

I agree there is no "a model", it should be "a record" or "a DTO" or "a repository" (which contains the queries to a particular table), or "a service" (that contains logic that calls several repositories).

The idea of having "a model" is closely coupled with the use of ORMs (which are an anti-pattern IMHO). They provide "models" or "entities" that try to be too much (wrap a db record, contain logic, can back a form submission, breaking the single responsibility principle on all counts).

I feel like "clean architecture" is trying to fix this, but only makes it worse.


It's because people ended up with models that were thousands of lines and difficult to reason about. Out of curiosity, did you end up running into this issue and how did you deal with it?


I work on a few projects that do have a model that is over a thousand lines long. A lot of the time, as the model gets more complex, you start moving associated logic into models of its own, which helps reduce the problem space. I think it's fine because the logic ends up being cohesive and explicit, whereas services end up with logic that is hard to track down and usually scattered when they get very large.


If I had to choose between thousands of lines in models and thousands of lines in controllers, I'd definitely take "fat" models over "fat" controllers.


In general, I think 'unit test' level business logic should be in the model (think configuration for data-driven workflows, normalization logic, etc) but 'integration test' business logic should be in a service (callback logic, triggering emails, reaching across object boundaries, etc).

I think most people agree about skinny controllers, but I've definitely seen disagreement on whether that logic gets moved to fat models or service objects.


> This I agree with. Callbacks cause a lot of weird side effects that make code really hard to debug.

Also Django signals, Symfony events... they make things extensible but also hard to debug, indeed.


attach a debugger to the running process


Such a simple thing, but so many organizations love to set up their projects in ways that make attaching a debugger surprisingly tricky.

Even the most basic editors and pretty much every language support interactive debugging - but if you set up a bunch of docker containers in a careless way, you end up introducing a layer that disrupts that integration. It's fixable, but for that you need to think _a bit_ about it, and most devs I meet these days are like "eh, why use an interactive debugger, print statements exist" (and then be like "oh no, signals are hard to debug :(").
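For a Python service, for example, the fix can be as small as starting debugpy behind an env var and publishing the port from the container (the port number and env var name here are just examples):

    # early in the app's startup, guarded by an environment variable
    import os

    if os.environ.get("ENABLE_DEBUGPY") == "1":
        import debugpy
        debugpy.listen(("0.0.0.0", 5678))  # bind inside the container
        # optionally block until the IDE attaches:
        # debugpy.wait_for_client()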


"debug" was a poor choice of word on my part. It's not about debugging, more about following the logic when the program is read by a developer.


That's fair enough, though again, interactive debugging can really help with understanding what's going on by just stepping through the call as it happens - just click "debug" on the test and play around with it.

But I'd agree the issue is real, and we're discussing mitigation of it, and whether that mitigation is sufficient. It's definitely possible to turn your code into aspect-oriented programming hell with careless use of signals, hooks, and the like.



