Happiness is a freshly organized codebase (slack.engineering)
184 points by felixrieseberg on May 7, 2020 | hide | past | favorite | 90 comments



The concept of linting repo structure is an excellent one.

I cannot even get started in projects without a sane repo layout. Source files scattered everywhere unrelated to each other, utils junk-drawers with dozens of files, multiple top-level source directories without explicit rationale, tests and source totally disjoint, hacks to modify build and run paths, implicit dependencies between directories, awful convoluted build-system configuration to match all of these idiosyncrasies, and impossible or very difficult editor/IDE integration as a result. Even rails-esque apps (which come with a reasonable structure) get really messy really quickly if somebody doesn't have the diligence to stay on top of it.
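A repo-structure lint can start absurdly small. Here's a toy sketch in shell; the allow-list and the demo directory names are entirely made up for illustration, not any real tool's convention:

```shell
# Toy repo-layout lint: flag top-level entries outside an allow-list.
# The allow-list and the demo directories are hypothetical.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/tests" "$demo/utils-junk-drawer"
cd "$demo"

allowed=" src tests docs README.md "
violations=""
for entry in *; do
  case "$allowed" in
    *" $entry "*) ;;                       # approved location
    *) violations="$violations$entry " ;;  # anything unexpected
  esac
done
echo "violations: $violations"
```

Wire something like this into CI and "where does this file go" stops being tribal knowledge.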

I want there to be a {sane, polyglot, large-ish-scale} project layout convention that plays well with {intuition/discoverability, build-systems, editors, project size}, but it seems there's an inherent tension between these. Maven tried. Kinda. I wonder if there's a fundamental problem we aren't solving somewhere.


I'm a little disappointed we haven't collectively stolen 'mise en place' from professional chefs.

Working in chaos is just masochism. There will be enough chaos you can't control, don't add accidental chaos to the system.


Ex-cook here, more than 10 years. Learn about mise en place and implement it in your work. It will be painful as hell, but you will begin working far more quickly and accurately than your coworkers, and it will likely end up with you leading.

I see so many similarities in the jobs. Great mise-en-place makes a great cook just the same as a great dev. Remember though, even the best cooks have to chop garlic during service occasionally.


I hear you. It's a really good metaphor, but it seems to get strained when your 'mise' is more than 20 ingredients.


I learned mise en place from a bike mechanic, and 20 'ingredients' isn't even table stakes for a mechanic (unless you count each wrench or socket set as 1 tool, and then we're close).

There were only two 'adults' in that shop - the manager, and Bob, the senior mechanic, and I'm pretty sure both of them were maybe 28. The rest of us were college age. He was constantly irritated that we couldn't seem to keep our shit organized. Most of the time this was some muttering or making little jokes at our expense.

Until you wanted to borrow one of his tools because you couldn't find yours, then you got an earful.

I was pretty sure I didn't like Bob at first, but now I'm not so sure.


Your "putting" is more than 20 ingredients?

(I do get your meaning, I just found your phrasing a bit funny.)


Also, French is much more the lingua franca of cooking than it is of software engineering.


Perhaps we should hard-code a rule (make it a cross-language industry standard) that says that the dependencies of a module should always be listed with the module (in a fixed format) and the build system doesn't allow the use of anything else (as a verification step).

That way, we could have a tool that always correctly shows the dependencies between modules.
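One toy way to picture this rule: each module directory declares its allowed dependencies in a manifest, and a verification step rejects anything undeclared. The `DEPS` file convention, module names, and the `import`-grepping below are all invented for illustration, not any real build system:

```shell
# Hypothetical convention: billing/DEPS lists what billing may import.
demo=$(mktemp -d)
mkdir -p "$demo/billing" "$demo/auth"
printf 'auth\n' > "$demo/billing/DEPS"                   # billing may use auth
printf 'import auth\nimport search\n' > "$demo/billing/main.py"
cd "$demo"

status=0
for dir in */; do
  mod=${dir%/}
  [ -f "$mod/DEPS" ] || continue
  for dep in $(grep -h '^import ' "$mod"/*.py | awk '{print $2}'); do
    # Anything imported but not declared is a build failure.
    grep -qx "$dep" "$mod/DEPS" || { echo "$mod: undeclared dependency $dep"; status=1; }
  done
done
echo "status=$status"
```

With a check like this in the build, the DEPS files can't drift out of date, so a dependency-graph tool reading them is always accurate.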


While we're at it, build semantic versioning into the package repository. For instance, if you push a version 1.1.3 that has a different API than version 1.1.2, the repository should just outright deny the push and require you to publish this as version 1.2.0. It's a little annoying on the part of the library author, but would make all consumers of the library much more confident in upgrading.


In Maven I don't like the test separation from the source itself, because it's much simpler to find and update tests when they sit next to the code.


There's a lot to not like about maven. I think that separation was motivated by java's classpath rather than making a 'good'/dev-friendly convention. After 10 years in maven-land, I changed a recent project to put `foo.ts` and `foo.test.ts` in the same directory and it's wonderful, but it confuses some build-systems and makes certain kinds of test-helpers harder to write.


Maven is my favorite thing about Java...


Anecdotally, I like that separation, because without it, in any large-ish project (which most real-life projects are), it gets super hard for my brain to pick out tests even if they all have a certain prefix/nomenclature.

Caveat: I use IntelliJ and I just jump to tests using CMD+SHIFT+T shortcut so not necessarily a generic example.


On the flip side, reorganizing a codebase totally fucks the git history.


Just make sure to use 'git mv' for all your files so the history can be tracked. 'git log myfile' will only show the history for that file, but 'git blame' will follow the line changes across file moves.

That aside, there are so many productivity and tooling gains from organizing your project properly that it's worth the history discontinuities.

Also, I've found that the worse the project organization is, the less likely the team is to separate different fileset changes into different commits to keep a useful history, or to make use of the git history at all beyond looking at a few recent commits or sometimes the last release. I've heard this as a very circular argument: can't re-org because we lose history; can't maintain good history because the project is too disorganized.


Has the behavior of `git mv` changed? Last time I investigated, the command was a convenience that performs the `mv` and the corresponding `git add`/`git rm` in a single step. Git doesn't actually explicitly track renames; it relies on auto-detection based on file content similarity.

Still good to use the command, just keep in mind git isn't really treating it any differently than a normal `mv` followed by `git add`.

https://git.wiki.kernel.org/index.php/GitFaq#Why_does_Git_no...
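You can see concretely that nothing special is stored: a throwaway-repo demo (it runs entirely in a temp directory, so it's safe to try anywhere). The "R100" in the output is recomputed from content similarity every time the diff is rendered, not read from the commit:

```shell
# Demo: `git mv` records no rename marker; the diff machinery
# infers the rename from content similarity after the fact.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
seq 1 20 > old.txt
git add old.txt
git commit -qm "add old.txt"
git mv old.txt new.txt
git commit -qm "rename old.txt to new.txt"
git show --name-status --oneline HEAD   # shows: R100  old.txt  new.txt
```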


This might not be the forum for it, but I have a question about this - I thought git was supposed to be pretty smart about detecting renames, but in my experience (on Windows) it never does, and I always have to use `git mv`. What's the deal with this?


Even if rename detection is enabled, git will only try to identify renames if it wouldn't take too much time (since IIRC it's an O(#files^2) operation).

Thus, there are configs "diff.renameLimit" and "merge.renameLimit" that tell git to only detect renames if fewer than N files are affected.

I have personally been burned by this before, and there usually is a default value set in most git clients.

See also some discussion at https://stackoverflow.com/a/7831027/1237375 .
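For anyone wanting to raise those limits before a big reorganization, the per-repo settings look like this (the 10000 value is just an illustrative budget, not a recommendation):

```shell
# Per-repo settings that make git try harder on large file moves.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config diff.renames true        # enable rename detection in diff/log
git config diff.renameLimit 10000   # consider up to 10k add/delete pairs
git config merge.renameLimit 10000  # same budget during merges/rebases
git config diff.renameLimit         # prints the configured value
```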


It does work, check what happens after you git add it all, not before.


'git log --follow' will try to follow the log back through the rename. It can be a bit slow with big repositories though.


If you do it all in one swoop it's not too bad. The difficult part is managing everyone's "perfect structure".


hg move $thing. I’m bummed git “won”


I think you can find edge cases in either tool where the data exists to make something work, but the data model doesn't allow it.

Consider a file like "package foobar; func DoFoo() { ... }; func DoBar() { ... }". You then refactor that to two packages, "package foo; func Do() { ... }" "package bar; func Do() { ... }". The body of foobar.DoFoo is identical to foo.Do, but no version control system I've ever used will move the history of that function body to the new location.

I guess what I'm saying is that tracking renames as a first-class object is nice, but it still doesn't represent many types of refactorings. To smugly declare perfection, a version control system should probably have that feature, because it is a relatively common refactoring.


Of course a tool that tracks the history of file changes doesn’t do your example. That’s not an edge case, it’s a flawed argument contrived to try and make a point, albeit poorly.


`git mv` also exists.


Bazel’s ‘visibility’ attribute for build targets can be the foundation of a repo structure linting solution.

If maintained properly, visibility settings can stop a bad code structure move in its tracks.


I feel like a lot of the pain in codebase organization boils down to having a technical project structure (i.e. layout on disk) that does not align well with the business. Obviously, it's impossible to force something to directly align with such disparate and abstract requirements, so you have to create abstractions (layers) to enable a hypothetically-pure realm.

In our architecture, we've created roughly 2 different kinds of abstraction: Platform and Business.

The Platform abstractions are only intended to support themselves and the Business abstractions. These are not supposed to be easily-accessible by non-technical folks. Their entire purpose is to make the Business abstractions as elegant as possible. The idea is these should be slowly-changing and stable across all use cases. Developers are only expected to visit this realm maybe once every other week. We effectively treat ourselves as our own customer with this layer.

The Business abstractions are built using only first-party primitives and our Platform layer. Most of this is very clean because we explicitly force all non-business concerns out into the Platform layer. Things like validation logic are written using purely functional code in this realm, which makes it very easy to reason with. Developers are expected to live in this area most of the time. Organization of business abstractions typically falls along lines of usage or logical business activity. We explicitly duplicate some code in order to maintain very clean separation of differing business activities. I've watched first hand as the subtle variances of a single combined Customer model across hundreds of contexts eventually turned it into a boat anchor.

As a consequence of all this, our project managers and other non-code-experts are actually able to review large parts of the codebase and derive meaningful insights without a developer babysitting them. We are at a point where project managers can open PRs for adjusting basic validation and parameter items in our customers' biz code. Developers also enjoy not having to do massive altitude shifts on abstractions while reading through 1 source file. You either spend a day working platform concerns, or you are in the business layer. Context switching sucks and we built to avoid it as much as possible. IMO, having a bad project structure is one direct side-effect of being forced to context switch all the time.


That's a brilliant way to separate concerns! I've found it's easier to understand a code-base when grouping src files by feature, rather than by file type (html, js, css).

But I end up with a strange mix of platform features and business-case features. I can't believe I never thought to simply separate these two distinct types into different parent folders...


I would definitely read and share a blog post about this that has an example!


Having a core+utils area and then separate feature (i.e. business) areas I think is another way of saying this, it's not entirely uncommon.


Wow, I love this idea. I'm curious how you have that structured within the code base. Is it literally a platform folder and a business folder?


Pretty much. We have an issue in our backlog right now that will pull the platform concern into a completely different project/DLL from the rest of the biz code. We still have a little bit of coupling, but we are 99% of the way there. Our long-term goal is to produce a company-common platform layer that can be used to build a wide range of final products. Most business applications that we would build share a lot of common concerns - namely how to manage business state, client views and transactions with external business systems. This is all implemented in various services within our platform layer so we rarely think about it. When we integrate with a 3rd party vendor's API, it goes into platform so anything can now use that integration.

The crazy thing I've come to realize is that the journey doesn't have to end there either... You can build yet-higher-order abstractions on top of your platform layer (i.e. a platform for platform). I don't know where this all ends up, but 1 or 2 iterations of it has been extremely healthy for our architecture and business use cases so far. We are now able to chain together extremely complex business processes in ways that can be reasoned with in a single POCO mapper. Without a separation of the "noise" of platform-related and other lower-order code from the business code, it would become impossible to see these opportunities.


Bob can you give more details about this or is there a way to contact you? I am genuinely curious about this type of architecture and would love to learn more. If you would rather respond here could you give two simple, concrete examples of each case?


Based on the responses here, it is apparent that I should spend some time documenting this concept in more detail for the greater good. I do not have the bandwidth for this right now, but perhaps in a few weeks I'll have time to put together some realistic examples for a proper Show HN submission.


Do you have some boilerplate or a demo published anywhere? Seems like a very neat concept.



I've encountered similar difficulties around codebases with a lack of file hierarchy structure. But one major difficulty in fixing the issue is that moving a lot of files around tends to trash `git blame`, which is often more valuable than knowing what folder to put a new file in. Is that something you've encountered?


There are workarounds to get git to search harder or to commit things in a way that's helpful for large-scale file-shuffles, but to be honest I've rarely found this to be enough of a reason to not move things around. The end result is a much more productive and purposeful place for code to live and grow. This said, moving files and moving code within those files at the same time is a recipe for confusing yourself and git. Move files first, then content.


A few options to look at specifically: --follow, -M, and -C. Less related to refactoring, I also find --first-parent and --merges useful.
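A quick throwaway-repo demo of what --follow buys you across a rename:

```shell
# Demo: plain `git log -- <path>` stops at a rename; --follow keeps going.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
seq 1 30 > a.txt
git add a.txt
git commit -qm "add a.txt"
git mv a.txt b.txt
git commit -qm "rename a.txt to b.txt"
git log --oneline -- b.txt | wc -l            # 1: stops at the rename commit
git log --follow --oneline -- b.txt | wc -l   # 2: the full history
```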


Try putting the refactoring commit in .git-blame-ignore-revs


Note that .git-blame-ignore-revs is just a convention; git doesn't automatically use the file the way it treats .gitignore. You can add it to the repo configuration though, so you don't need to pass --ignore-revs-file every time, provided that you're running Git 2.23+[0].

    $ git config blame.ignoreRevsFile .git-blame-ignore-revs
This is extremely useful if, for example, your team decided to introduce new automated style changes, and didn't want to completely clobber the history for blame. By specifying the commit that introduced the style change, actual "interesting" changes should be correctly attributed and the style change commit is ignored in the blame output.

[0] https://github.com/git/git/blob/master/Documentation/RelNote...
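For reference, the ignore file itself is just full 40-character commit hashes, one per line, with optional `#` comments. The hashes below are placeholders, not real commits:

```shell
# Set up the ignore file plus the config in a throwaway repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
cat > .git-blame-ignore-revs <<'EOF'
# Reformat entire codebase with an auto-formatter
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
# Mass move of files into feature directories
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
EOF
git config blame.ignoreRevsFile .git-blame-ignore-revs
```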


On codebases that have had this sort of mass file-moving, one tactic I've used is keeping a link to the commit prior to the file-moving. Then I can check out that version, manually browse to the file's prior location, and use git blame. It's manual and tedious, but it works.

Perhaps one mitigation could be to include a message describing something like this in the git commit for the mass-move.


can you actually make a symlink to a commit in .git/ ? neat...


You can in mercurial. Should you... different question.


Use Mercurial? :)


This is the correct response. Using mercurial is like learning lisp after a c-like language. I don’t hate git, but I sure don’t like it either.


I sometimes wish Git properly recorded copy and move information. It turns out Git's heuristics for detecting copies and moves work about 95% of the time, and for the remaining 5% it's mildly annoying to read a git blame where every line comes from a move. You can blame further back across that move, but that's manual. If copy and move information were perfectly recorded, I'd have more incentive to do these kinds of code reorganization.


This article mentioned a while ago had relevant info: https://news.ycombinator.com/item?id=22689301

In my experience, ensure you don't change the code as you move it, don't rename files and move content at the same time, and don't move "too much" code in a single commit. Around 2k lines at a time seems to be a good number. Maybe some languages/structures are easier for git to analyze when doing blames.


How do you not change the code as it's being moved, when you need to update #include directives, imports, and other file paths?


It's easy if not every commit needs to compile. :)


Sometimes it is unavoidable, but in general minimize it. Don't refactor a class and rename it at the same time. Hopefully moves just result in imports and paths changing, something that is not likely to confuse a blame (and rarely do I care about the blame lines on #includes/imports)


New branch, make two commits. Squash merge the branch as a single commit.


'git mv <source> <dest>' works pretty well for this.


It does not. That's the same as doing a normal mv followed by git add. It does not record the fact that you moved it. You can verify this by moving a file and making big changes to it in the same commit. Git will forget that you moved it.


If you `git mv`, then immediately commit (without making any additional changes), I think it will recognize it as a move 100% of the time (i.e., will not use its "similarity index"). I'm not certain of that though.


That's correct, but this also applies to "move file" + "git add". Just make sure to commit the "move" and "change" parts separately.


Sure, but if you mv and git add without making any additional changes and then immediately commit, it'll also get picked up as a move 100% of the time.
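A throwaway-repo demo of the "commit the move and the change separately" advice: the pure-move commit shows up as an exact rename (R100), and the content edit stays in its own, easy-to-blame commit:

```shell
# Demo: move in one commit, edit in the next.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
seq 1 50 > util.sh
git add util.sh
git commit -qm "add util.sh"
mkdir scripts
git mv util.sh scripts/util.sh
git commit -qm "move util.sh into scripts/"   # pure move: exact rename
printf 'echo done\n' >> scripts/util.sh
git commit -qam "append trailing echo"        # separate content change
git show --name-status HEAD~1 | grep R100     # the move, recorded cleanly
```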


Interesting read!

> No more dumping ground folders like “Helper” or “Utility”

If you’re organizing by feature and you have one of these helper/utility classes that support multiple features, where does it live? Would you consider each utility to be its own “feature”?


I actually think it's vitally important to have "dumping ground" locations for things that aren't fully figured-out yet. If I'm working on a new thing and I have something that's relevant in multiple files but I'm not sure where it should go, or even whether it will stick around, I don't want to have to come to a screeching halt just to make a taxonomic decision about something that's still a work in progress. The key, of course, is going back and categorizing those things later once you do have an idea of where they should go.


Agreed. I've been using the idea of a "shame file" in css as a dumping ground for this sort of thing.

It was brought to my attention by this post https://csswizardry.com/2013/04/shame-css/

And yes, there needs to be a structured and scheduled review and cleanup otherwise it becomes a nightmare. We do this every 3 months along with scouring the codebase for TODO's and FIXME's. It's done in mob-programming style with drinks which makes for a lot of fun and a good way to share our "mistakes". But to be honest, it's starting to devolve into a "coding roast" of sorts!


Years ago on a large project with many trainee and junior contributors, I had a sandbox.css as the last css file.

Juniors could try out new things and add in naive fixes for bugs. Then in code review we brought the change up into the cascade as a method of teaching cascade thinking.

It's rare to do that now as most front end devs are not interested in the cascade and use things like BEM instead. I can understand why - it's pragmatic - but I did personally find that a great way of building up devs who really grasped css on its own terms.


Totally agree. So much of programming and designing is figuring out what pattern your code is falling into and what stuff "is" or "means".

I believe it's totally okay to name things utilities when you haven't yet established a common pattern or understanding of it. Naming things is hard and spending so much time on it and organizing things can often block you from progressing to a place where you DO have more information and can intelligently name things.

You'll always be juggling unknowns, so it's okay to have dumping grounds here and there, so long as over time you clean them up as you gain more info.


I recently started working within a Python codebase, and one of the things I really like about it (not sure whether this is standard Python practice) is that most directories have a "common.py" file in them. So if you just want to put something somewhere real quick, you can elevate it to exactly the appropriate directory-level instead of going to a single, global "utils" file. It's a neat pattern.


Or even using helper/util files as a place for methods that shouldn't be shared or reused. If I write a retry helper to get over an async waiting period then I put that retry method in the helper because it's so utilitarian and contextual I don't want to encourage reuse of said method. If eventually it can be replaced that's good but a dumping ground is a must.


> The key, of course, is going back and categorizing those things later once you do have an idea of where they should go.

Yes, but in my experience this is rarely done. It's put off until it gets so convoluted that it introduces bugs. The same is true even if you do avoid dumping-grounds: you need to examine the project's state somewhat regularly to ensure it still makes sense. Where code lives can be as important as the code itself and thus deserves at least occasional review and change.


It depends on how many projects you have. A place like Slack has only one client/product really when it comes down to it, so they can get pedantic about it and look down on smart reuse across projects with robust tested libs on top of standard libs that are necessary to increase production and stability/security etc.

If you work at an agency, or a game company shipping many games, there will always be a "core" or "base" library that has Helpers and Utility code. Good, consistent projects want tested and solid parts. Every game has a common lib of core tools like maths, vector tools, prediction, data structures and many more.

There is such a broad difference in coding on one platform for one company for one product compared to shipping many companies (or many internal projects) on many platforms for many products.

Even standard libraries for platforms are essentially a Helper/Utility really when it comes down to it: .NET Core, Python standard lib, standard node, C++ distributable etc. There will be common core tasks across a project, product, company or platform for most teams with many projects to manage.

If you don't have common libs with common helpers/utilities in large sets of products, technical debt and maintenance become a nightmare. In projects we work on these are core/base libs that are submodules in git and are essentially the 'tech' across projects where the project/product itself is the unique implementation for that product. Anything common or generic gets put in the core 'tech'. Every single game studio will have these as well as agencies if they are organized and produce quality relatively fast and consistent.

If "Happiness is a freshly organized codebase" they must have common tested parts that are ship tested.

This is really just Slack talking specifically about mature products at a company that only has one product. It isn't reality at places that have many projects, products, companies, clients etc. Pretending they are the same is a bit elitist.


We discovered a lot of cases would be unique, so we do try to evaluate a "best" case when it comes to each move.

Keeping that in mind, if a feature has a shared helper across components within the /Feature folder, we'll have a /Shared folder within it to capture these files. At the top level, we would encourage a new /Feature folder, since most likely it will have tests or be important enough to merit a folder rather than squishing it into another /Feature folder. We're optimizing for visibility too, so having its own /Feature folder helps with that.


Thanks! And am I correct in assuming Slack just has a single target for the iOS app (App), then one for each App Extension? So the root source folders map 1:1 to build targets?


Yes! That's the idea, or at least what we've worked to achieve.


If they don't have anything in common, yes. If you have one collection helper method, one time helper method and one auth helper method, far better to put those in a collection class, a time class and an auth class than bundle them up as a "utils" class.


I always argue adding "helper" or "util" adds nothing to the description of a file/module/class and is better left off


I feel like these names convey a lot, though. When I see something like this, I immediately expect it to not do any of the following: (1) contain core logic or define primary classes/types/records; (2) have dramatic side effects; (3) be hard to test; (4) reference or depend on other parts of the codebase; (5) do anything controversial in general.

I’d say I’d expect a “utility” to adhere to these conditions more strongly than a “helper,” which might be a bit more entangled with application logic.


Utility - few or no dependencies. Absolutely no dependencies specific to the project. (array flattener, 2-way map, etc)

Helper - function which makes some really common project specific code more DRY (createApiError, genCommonConfig, etc)


I'd also consider a Helper to be something that may have application-awareness. Whereas, as you stated, a Utility would have no serious dependencies and is otherwise "dumb".


For new projects (especially as a solo developer), there's also a related topic of not taking advantage of tools like kanban boards, or having a place outside of your source code to organize your thoughts and research.

It's very possible to wind up with massive comment dumps of things to research, alternate implementations, notes to yourself and other things littered in your code base where you haven't made any git commits yet.

This really leaves things in a messy state where you feel like the project is never going to be finished.

An example of this and how I solved this problem with a kanban board can be found here: https://youtu.be/HHOkcCqsipE?t=76


I've been wondering a lot recently if folders make projects better or worse.

For example, when I write a library it's usually very simple. There's a single directory which contains all of the source files. When people use the library they:

    import lib

    lib.run()

Dead simple, no complex module paths to remember, no hierarchical folder structure forcing you to code based on a pattern rather than functionality. Pure bliss.

But on the other hand, I have projects that contain 100k lines of source code. I can't just leave it out in the cold. So poor baby gets a couple of folders.

But I do hate it. I hate writing the code. I hate reading it. I hate finding it 6 months after the fact.

That's probably just the nature of the job. It is work at the end of the day. Maybe it's just doomed to be hard.


I'm not plugging the language, but I've come to appreciate OCaml's module system with no imports and (mostly) globally unique module names. No circular dependencies allowed either. You can have multiple modules within a file (which is also a module named after the file name).

I structure larger projects as libraries with minimal dependencies that depend on one another, and dump all my modules with descriptive file names under the same directory within the library.

I vaguely remember reading something that hinted at Facebook doing something similar with their React components.


There is this wonderful utility: dependency-cruiser[0] for javascript / typescript projects.

It visualizes dependencies in a project. I found it so, so easy to refactor and move files around after I started using it. I am not usually a visually-oriented person, but for this use case, and to be happy, `dep-cruiser` surely helps.

[0]: https://github.com/sverweij/dependency-cruiser


What if we just got rid of files and put our code into a database?

You'd just have a "new code block" button, which creates the editor tab for your code, usually a function or a class, and usually one item per block. When you save it, it puts it into a database and you can version things easily. You can call other functions from the block and your editor will show their code when you mouseover or maybe some other method, just like today. Basically the same as today, but you don't need to worry about where some code lives. You back it with a great search feature to find stuff.

Hell, let's just eliminate pathed files, why do we care about file paths with the level of search today? Just store everything in a key:value store directly on the hard drive, no paths needed. For legacy, just add keys for '/etc/fstab' or whatever.


>"The Slack iOS team lived in these conditions for a few too many years. We got here as a result of some attempts to organize source files (several times), a lack of architecture pattern in the codebase, and a high growth of developers over a couple years. To put things into context, we have roughly 13,000 files (and counting), about 27 top level directories, a mix of files in Objective-C and Swift, and around 40 iOS developers that work in one monorepo."

An extensive, unrefactored codebase is no different than a jungle.

You might have a 10'xer programmer on your staff, and he might hold the programming equivalent of a machete, but if the rate of his refactoring (assuming your corporate rules let him) is slower than the rate of new code being added by other employees, he is going to fail, no matter how good he is!

I need to write a future essay about the relationship between 10'xers and how a 10'xer is only as good as a combination of how well the codebase is refactored, how well they know the codebase, how much corporate rules/policies permit refactoring (or not), how much time he doesn't have to waste solving stupid one-time issues from single customers, and how much help or pushback he is or isn't getting from the rest of the team.

In other words, given the right set of conditions between codebase size and obfuscation, limiting corporate policies (i.e., "you can't refactor", "you can't make just one mistake in your code, because it's all mission critical, and if you do, you will be fired, and by the way, there's no test environment!"), distraction ("you have to help this customer with his cosmetic problem before you are permitted to tackle the guts of the system"), and pushback from the rest of the team, you can actually change 10'xers (and higher!) to 1x'ers and below...

The reverse is true too...

I'll make anyone a "My Fair Lady" / Eliza Doolittle style bet (or the reverse!) that what I say is true!

That is, that 1'xers can be taught to become 10'xers, and conversely, 10'xers can be hampered by a variety of factors ("The Perfect Storm") resulting in them being slowed to 1'xers, or below...


> Danger is a tool we integrated into our Continuous Integration system that performs post-commit automated checks

You mean like the post-commit hook that Git offers out of the box? It's even named the exact same! I feel like we didn't spend all this time on fast build, test, and deploy cycles only to then commit, navigate to some website to create a "merge request", wait for some fairy to allocate computing resources for trivial checks my computer could have done, only to get some pure-noise "don't put this file here" comment and repeat the cycle all over again.
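For what it's worth, trivial structure checks really can live in a plain local hook. A sketch, where the `Utils/` rule is hypothetical (real checks would encode your team's actual layout rules):

```shell
# Demo repo with a pre-commit hook that blocks new junk-drawer files.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Reject commits that add new files under a deprecated junk drawer.
if git diff --cached --name-only --diff-filter=A | grep -q '^Utils/'; then
  echo "error: no new files under Utils/ -- pick a feature directory" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
mkdir Utils
echo 'helper' > Utils/misc.txt
git add Utils/misc.txt
git commit -qm "add junk" || echo "commit rejected"   # hook blocks it
```

Same check, zero waiting on remote CI; the CI run then only confirms what you already know.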


When did you think I was going to get coffee and/or check Slack for more annoying work to do? It's like you want me to get things _done_, which is not fun at all.


In Go, there's a standard project layout [1]. It'd be nice to have a project layout linter in Go Report Card [2].

[1]: https://github.com/golang-standards/project-layout

[2]: https://goreportcard.com


Just to clarify, that’s not an official standard at all (i.e: not supported by the go team). It’s a community project to establish some common layout.

Surely valuable, but to say it is actually a Go standard is misleading IMHO.


Ah! I stand corrected. Mea culpa.


Needing organized codebases is a personality type, as there is zero academic research I'm aware of that has ever shown strict organization results in fewer bugs or faster development.

These are all done by people that need to feel in control and that everything has a place. So much money and time has been wasted on stuff like this with zero proven or measured benefit.


There is famously also no academic study showing that parachutes improve outcomes when jumping out of a plane.


There is a study for that! "Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial". [1]

[1] https://www.bmj.com/content/363/bmj.k5094


Maybe, but there are other metrics than just developer output.

If having a codebase that's well organized makes developers happier, and happier developers stay longer at the company, then that's also a win.


This makes some, definitely not all, developers happier.

Organization for happiness depends on the individual developer's personality, and it means you are prioritizing some engineers' happiness over spending that money providing other things for the developers whose personality type derives no benefit from strict organization.

In fact, strict organization rules make some developers less happy. Why is their happiness less important when this is well known to have no business benefit?



