Measuring software engineering competency (savvyclutch.com)
129 points by ferroman on April 10, 2017 | 95 comments



I do technology due diligence on behalf of investors for a living, so I live and breathe this type of stuff on a weekly basis.

All companies build software differently. Some have automatic deployment, some don't. Some have strong testing procedures, some don't. Just because a company doesn't use CI doesn't necessarily make them "worse". It may just be indifference to the indoctrination of the "SV mindset".

The more important answers are not binary yes/no but rather "why aren't you using CI?". Common answers are:

- I'm not sure what CI is

- We don't have enough unit tests to justify it

- We're a small team and it doesn't really justify the effort to setup

- We're working on setting up and should be live in the next 3 months

You can tell a lot about engineering competency and leadership from those answers.


Not sure why you got downvoted; you are clearly speaking from your experience working with different teams and exposing real answers that many people could voice out loud every other day.


My personal answer would rather be: one is working on a kind of software with additional safety or security requirements, where CI would be a really bad idea.

This does not contradict the idea that practices that are more or less necessary for CI, such as really high test coverage (typically even a requirement for this kind of software) and automated builds (which can improve productivity a lot), often also make sense in such an environment.


Any advice on getting into this field? Seems like it would be pretty rewarding work (mentally).


Honestly - I stumbled into it. I really don't have any good advice other than know the right people.


This has 3 big problems that make it a poor substitute for the Joel Test: there's way too many questions, some of the questions don't have a universally accepted "good" answer, and questions have too much ambiguity and wiggle-room.

One nice feature of the Joel Test (not that I saw it being used in reality, but as a mental model anyway) was that you could easily categorize companies into places you want to work or places you don't want to work. A perfect score? You want to work there. More than 2 things they don't do? You don't want to work there. 1 thing? Maybe look into it and see how important it is to you.

With this, your scores could be all over the map. What's more, the questions a company misses on might be ones that aren't that important to you (having a library), or even where a 'no' might be preferable to you (daily stand-up).

Even once you get past all that, many questions aren't easily answered objectively. What's a short iteration? I've worked in places that touted 2 weeks as a remarkably short iteration, and others who bemoaned how long that was.


> A perfect score? You want to work there. More than 2 things they don't do? You don't want to work there. 1 thing? Maybe look into it and see how important it is to you.

That's all well and good if you have the luxury of picking and choosing from multiple offers. Here in the real world (i.e., not in SV), getting a decent offer (if you're not entry level) that's at least equal to your current pay generally takes 6 months - 1 year of hard interviewing. If I demanded a prospective employer scored even 50% on the Joel Test, I'd be perpetually unemployed. Which is probably why employers generally get away with providing sucktastic working environments for software developers.

I find it especially disheartening that the only one of Joel's 12 'tests' that's pretty much a universal 'yes' these days is "uses source control".


I'm not from SV, but I am from a tech hub, so I'm sure our experiences are pretty different. Regardless of that, adding granularity and ambiguity to the score doesn't help much in your case either (particularly since things like source control were removed, probably because "it's a given", when that might not be the case outside of startups).


While I haven't seen a shop that completely eschewed source control in a long time, I did once work at an especially dysfunctional company where only one designated person (the QA) was allowed access to the svn repo.

So individual programmers were basically forced to work without any of the benefits of an SCM. The designated repo master would place a zip of the latest code in a folder, the programmers would copy it to their home area, and when we wished to 'commit' something, we'd copy our files to a staging area and put in a request with the QA to do the actual commit.

Can't I just commit to a branch and let the QA merge the branches into the trunk later? No.

I need to edit a file, time to: cp file.c file.c.bak1


If all you want is structured save points, git runs locally! See also git-svn and its ilk, which allow you to interface with a different SCM while you keep using a familiar git interface.

Assuming you're entrusted to install software locally, that is.
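
If you are, a minimal sketch of that local-only escape hatch (file names and commit messages are made up):

    # Even with no server access, git can version the QA's zip drops locally.
    git init
    git add -A && git commit -m "import latest zip from QA"
    # ...edit freely, with real save points instead of file.c.bak1...
    git add -A && git commit -m "refactor parser"
    git diff HEAD~1 > for-qa.patch   # hand QA a clean diff to commit into svn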


> Assuming you're entrusted to install software locally, that is.

BWAHAHAHAAA! No.. :(

At my current job, I've been waiting on a software install request (you know, just an IDE, a small thing) since early February. Coding in Notepad++ till then (w00t). Been waiting on a RAM upgrade (stuck at 4GB) for... well, since I started back in October.

Programming as a job sucks. Can I do something else and just keep coding as a hobby (at home, where I have decent tools and don't have to ask permission for every $%@! little thing)? I swear, if I could get paid just as well to bag groceries/serve coffee/etc. and not have to deal with an antagonistic/uberpolitical IT dept run by Vogons, I would do it in a heartbeat.


Can you relocate? I have seen shops like this, but if you can relocate that expands your options by several orders of magnitude.

One example: where I work we have our choice of Windows/Mac/Linux (though if you want to run a Linux other than Debian, Ubuntu or RedHat/CentOS, you're on your own as far as IT is concerned); workstation upgrades are every 3 years. I'm due for a new one in May, so I only have 8GB of RAM right now. Every desk has at least two monitors, and every office has a door[1].

This is not in SV, nor is Fog Creek (where Joel was when he wrote the "Joel Test").

It costs nearly $200k to employ a software developer after taxes and benefits, so not being willing to shell out a few thousand a year on tools that will give a performance benefit (even if it's just making that person happier in their job) is wasteful.

1: There have been points in the past where cubes were used as a stopgap while we were finding more square footage, including when I started. The person getting me set up apologized for putting me in a cube.


Maybe your area really sucks, but I've never had the misfortune to work at jobs even half as bad as the two you've described here. I'm not in a tech hub at all.


I would find a job in another company post haste.


Programming as a job doesn't suck. You were just unlucky enough to find one of the shittiest companies out there, and it sounds like you decided to live in an area without much competition. I can tell you right now, in an area with a halfway decent number of employers competing, you wouldn't have that many problems, and would more easily be able to switch jobs.

I hope your situation improves soon.


> there's way too many questions

How's that too many?


Again, let's look at the context for why the Joel Test exists - it's a way for developers to make a quick evaluation of a company's practices when considering it for employment - hopefully something you can cover in a phone screen. I would be surprised if you could get through these in the typical time people allot for candidates to ask questions in a normal screening call.


A) Actually, we have plenty of ways of measuring software engineering competence.[1][2][3]

B) Weirdly, all of them are heavily process focused and none of them have much interest in the ability to write functioning code, because

C) Software engineering, as a field, is built on the belief that anyone can write software if properly managed.[4]

[1] http://cmmiinstitute.com/

[2] http://www.sei.cmu.edu/certification/

[3] https://www.computer.org/web/education/certifications

[4] and that would make software development much cheaper.


Indeed, anyone can write software. However, writing good, maintainable, scalable software is a totally different thing. There are so many skills involved (someone posted a skill matrix recently which I liked).

I taught martial arts for many years, and I can honestly say that I can teach martial arts to anyone. However, 98% of those learning will suck at it. They don't have the aptitude, the dedication, or the pain tolerance.


Re C): Indeed - and the belief extends to the proposition that to properly manage it, you do not need much understanding of what the detailed design and coding aspects of development actually entail, so long as you know Software Engineering.


That is an extension of the old 'managers need to know how to manage, not how to do things.'


It seems to me that more and more ceremony is being added to software development which distracts from the actual work. This only benefits 2 groups of people: people that don't like the actual work but still want to fulfill a role in the process, and the agile 'industry'.


> ...distracts from the actual work...

YMMV here. I've much more often encountered situations where I had to interview the maintainer of a repo (if there even was an official one) to find out how to contribute, whether they were even interested in contributions, and how to make sure my change didn't break anything.

A lot of these tools, processes, and special words are as much about good passive communication as anything else. That being said, tooling that doesn't fit development use cases is often worse than no tooling (presuming devs can make their own productivity scripts as needed).


Some of that is needed, though. If you're going to just shut one or two people in a room for 9 months, then you probably don't need it. But today usually you're going to have bigger teams, and the business is going to need more visibility into the project, while also giving developers a degree of autonomy and ownership of it.

When the business is willing to let it work, Agile can be kinda nice. But if the business doesn't buy in, if they keep interrupting, changing priorities and tasks mid-sprint, then it's just going to make everyone miserable.


> Some of that is needed, though.

The most important thing you need to make sure of is that developers talk with end users. That is one of the biggest points of Agile.

Now, if you look again at the article, you'll notice this one interaction is completely missing. Not only that, but Scrum de-emphasizes it too by creating middlemen, and most formal "Agile" methodologies don't even think about it.


I'd settle for a place that stuck to the original Joel Test.

After all, most of the CI/CD stuff in this list is covered by his original one-step-builds rule. If you have that, then scripting the CI/CD stuff is trivial. Similarly, his test already covers testing.
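
As a sketch of what "trivial" means here (script name and make targets are hypothetical):

    #!/bin/sh
    # build.sh: the whole build in one step; any CI server just runs ./build.sh per commit.
    set -e           # stop at the first failure
    make clean all   # compile from scratch
    make test        # run the test suite
    make package     # produce the deployable artifact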

Everything else in here is just disguised Scrum.


True, but not many teams follow these in practice. We just want to make it more specific and modern.


> True, but not many teams follow these

Isn't this a very good indicator that the Joel test is still sufficient?

Only once almost all companies pass the Joel test with 12/12 points [1] would a modernization be needed to distinguish further between multiple offerings.

[1] or 11/12 points, see https://news.ycombinator.com/item?id=14078069

EDIT: Fix typo.


It is relevant, but we try to make it more specific.


Sorry, typo. I meant "sufficient", not "relevant".


Is that a reflection on the test, or the teams?


Teams, of course. The goal was to measure it, and we tried to be more specific.


Conspicuous by its absence is:

Does your software work?

These criteria are all rituals and processes, rather than the end result.


1 million times this. And perhaps: does using the software make the user happy? Rituals and processes keep people in a job and sustain the agile industry, which is why they are being pushed everywhere.


They're signals that indicate a well-functioning development team, and they can be researched with little effort and answered objectively.

"Does your software work?" is almost impossible to answer objectively, and doesn't help you determine if the software is going to work 2 years from now (which a good deal of development best practices work to achieve). You might as well replace the test with "Is this company awesome?"


In general, the way to objectively determine if software works is to give it to those pesky end users, have them use it, and tell you if it works or not, and how well it works. Lacking end users, have testers and QA folks who play the role of end users evaluate it.

In some areas of software development, such as heavy-duty algorithmic/mathematical programs (encryption, video compression, computer graphics), there are pretty rigorous, objective ways to measure whether the program works and how well, without human testers or QA people. It's usually best to do both even in these cases, just in case there are some subtle issues that the performance metrics don't capture.

On the other hand, predicting the future is noted for being rather hard. Predicting correctly whether a program will need to be changed in two years, in what way, and whether the program can in fact be changed easily two years in the future is speculation, a matter of opinion, rarely objective at all.


You're right that there are classes of application that can be said to "work" objectively.

For the majority of business-facing SaaS applications (to name an example), "working" is an elusive target. If we gave it to those pesky end users, and there's more than 5 of them, I guarantee you'd hear multiple answers about how well it works, and even whether it works at all.

I'm not saying that looking at how well your product works for people isn't a noble endeavour or anything. For the purpose of what this is supposed to be (an easily obtainable, objective measure of what it's like to work for a company), it's horrible.


Or perhaps they're CYA techniques for when your project goes over budget, over time, and underdelivers.

A well functioning development team is wonderful...for developers.


Several of the points are effectively asking "Do you know that your software works right now?". Without CI and automated tests, your knowledge of whether your software works is inherently limited in scope to what you can manually test in a reasonable amount of time, and is out of date as soon as someone adds a new commit to master.


On the other hand, there are plenty of shops with CI and test suites that simply fail. Like, everyone knows lots of the tests are red from some legacy code or workaround, and they just stay that way indefinitely.

That's sort of my concern with these indirect questions about tools - it's very common to have them but not have them doing their job.


In my experience, good programmers can produce working software with or without any processes. Poor programmers cannot produce working software with even the most perfect, appropriate processes.

All of those practices can help, if used appropriately. But they're not going to magically make everything better.


Yep. There was good code before CI and source control, and there's bad code even where those things are in use. Certainly they're tools worth having, but checklisting tools doesn't actually say much about software quality.


The answer is universally the same across all software ever: no. The space shuttle probably came the closest, and they still had to fix bugs - or in other words, there were still edge cases where the software did not work. You can step back from perfection, but then where do you draw the line? Even the buggiest software often worked a little in some situations.

If you have everything on the list in software that doesn't work well, I can spend the next 5 years fixing bugs and eventually get to useful working software - it won't be the most fun job, but it won't be a job I hate. Everything you are missing from the list makes it that much more likely that the missing thing will frustrate me until I just want to quit.


You realize that the shuttle software development process[1] would answer 'no' to essentially every question there?

[1] https://www.nap.edu/read/2222/chapter/5


If, in a complex domain, some metric gives the same result across the entire space, then there is a decent chance that the metric is not a very useful one.

Your stringent definition of "working software" seems to fall in this category.


Unfortunately, it's much more complex than that. Does your software work? Does your software work right? How quickly can you add to it? How do you avoid bugs when you introduce a complex change? How do you share knowledge with new people on the project?


A little additional context: Joel's "test" was just a quick-and-dirty way for a job candidate to assess whether or not a company had competent software engineering practices. Assessing your own team's practices could (and should) go far more in depth, since all the messy details are right there. The spirit of this article is good, but I wonder if we can formulate a good test that applies universally to all teams and carries enough detail to help that team improve.


A lot of companies get the basic source control, builds, bug tracking and writing code during interviews parts right, but tend to skimp on these aspects of the Joel test:

- Do you fix bugs before writing new code?

- Do programmers have quiet working conditions?

- Do you use the best tools money can buy?


> - Do programmers have quiet working conditions?

This one is the most important one for me, and the absolute hardest to find. I believe we (as programmers) let ourselves get overwhelmed with extreme programming, daily standups, burndown charts and other mostly meaningless stuff. We forgot the basics: peace and quiet. Everything else is just extra.


While I agree with most questions, I always found number 9 to be a bit strange:

> 9. Do you use the best tools money can buy?

I hope I don't do too much injustice to Joel, as I always loved to read his writings back when his "Joel on Software" blog was active.

However, this item on the list always sounded to me like an attempt to promote their FogBugz tool, not objective advice.

Recognizing there are many excellent Free Software tools, especially in the software development area, I'd rephrase it as:

> 9a. Do you use the best tools available?

or maybe:

> 9b. Do you invest in your tools?

which means, depending on the exact tool, one or more of:

- buying a proprietary tool

- using a Free Software tool, and donating money to the project

- using a Free Software tool, and providing bug fixes and/or new features

- having one or more team members dedicated to improve the tooling and infrastructure


> 9. Do you use the best tools money can buy?

I always took this as a criticism of companies where you've got developers who earn multiple thousands per month pecking away at old PCs and squinting at 15" CRT screens. Waiting 5-10 minutes for the OS to start up and more than 5 minutes to build a project.


Frequently the objectively best tool is Free or open source software (which doesn't mean it's priced at $0, although often it will be). But many times it's not, and that's when companies can become extremely penny wise and pound foolish.

Hiring someone to exclusively babysit a Jenkins instance is incredibly expensive. Paying for Travis CI/Codeship/Gitlab CI is really cheap in comparison. Having developers fill out purchasing orders and waiting for software or hardware is very expensive.

I like to call it the "IntelliJ test", can I requisition IntelliJ ($499) and have it the same day (week? month?) or is the company going to flinch, hem and haw at the absolutely inconsequential price of the software in comparison to the expensive developer time they're paying for.


It's not always cut and dried. I was the "Jenkins babysitter" for a lot of years.

At scale I don't think most off-the-shelf CI/CD tools hold up. You will need dedicated people to take care of them.

Of course, if all you have is 100x plain software projects which don't depend on one another and there's no sort of other interaction between them, by all means, go for SaaS CI/CD.

If there's any kind of orchestration needed... it doesn't hurt to hire a professional to do it rather than force 40+ developers to do it piecemeal between their other tasks, which will often have a higher priority due to management demands.

To rephrase your statement, I think you should get the best tools that are realistically affordable for your process. On top of that you should also get the best supporting cast for your process since often tools on their own don't cut it.


I have only ever been able to use IntelliJ at home (where I buy my own copy) on my own projects. Everywhere else it's Netbeans (meh) or Eclipse (blech).

I remember I was once told to integrate an ancient DSP library into a new development project (despite more modern alternatives being available that would have met the actual requirements just fine). This library looked like it was originally written on VMS (little hints of VAX-ness here and there), later ported to Solaris, and finally ported to Linux. It was a mix of C, C++, and Fortran, and it required the Intel C and Fortran compilers in order to build (no, gcc/gfort wouldn't work) along with some of the icc runtime libraries in order to actually run.

My employer at the time absolutely refused to purchase the Intel Compiler. But expected me to get the job done. I can't remember exactly what I did to make it work, but I do remember it was a giant kludge. That entire project was an underfunded/understaffed/mismanaged nightmare.


I'm a bit here and there on this. I had an old perpetual IntelliJ license and it still worked perfectly. When they switched their model to yearly subscriptions... why upgrade? I've been using 2016.x and now 2017.x and I still don't notice any difference. Maybe it starts up a little faster? My work laptop runs for weeks. It surely doesn't index faster...

But that's not the point, I know :) And I still love it, it's worth the money.


I always took "the best money can buy" to be an idiom for "the best available". I don't think he was advocating for paid tools literally whenever possible.


Joel's criteria are intended to be efficient and diagnostic of common problems. I think his choice here is based on seeing teams use ineffective tools because effective tools are "too expensive". Whereas being made to use expensive tools and being forbidden from using more effective free/cheap tools is more rare.

One thing Joel's list isn't is a guide to how to develop software, or how to manage software projects, etcetera. There are many books about that. Joel's list is about recognizing ineffective leadership that will waste your time, which is finite, and limit your career and your earning power. And it is a checklist that you should be able to move through during a single on-site interview, not something that requires a four-hour conversation and an in-depth analysis of their decisions.


You're thinking "no OSS" because you can't buy it like purchased software.

I'm thinking desktops/laptops with lots of RAM and SSD, fast internet, a proper chair, standing desks, a 4k monitor, ...


I think his blog is still somewhat active https://www.joelonsoftware.com/archives/


Indeed, during the last three years there were 4 articles. However, 2 of them were less insightful and more like marketing pitches.

Anyway, I still have his blog in my RSS reader, so I won't miss it if he writes a new article. Even his more marketing-like articles are quite entertaining, yet accurate, so they are worth reading, even if it's just for observing and learning from his writing style.


This test completely misses what it means to be a senior (effective) engineer. The real difference between a mid-level engineer and a senior engineer is that the mid-level engineer will mechanically apply the same strategies to all projects without thinking. All the points mentioned in this article (CI, one-step deployment, daily status check-in meetings, etc.) are in fact not necessary for ALL projects.

The quote "Those who only have a hammer tend to see every problem as a nail" is a good summary of the junior/mid-level mindset.

I don't know the author, but based on the rigidity of the article, I would guess that they've only worked for big companies. I would argue that most of these rules are only effective in the context of a very large company; in literally every other context, many of these rules are inefficient.

Big companies are all about risk mitigation; they are willing to sacrifice speed and agility in exchange for stability, certainty and visibility but this is actually a luxury that only big companies can afford and should not be taken as a rule of thumb.


I may be in the minority, but I find the 50% unit test coverage requirement not useful, and sometimes harmful. Caveat emptor - it depends a lot on the project, how often you are changing the code, and the complexity of said code.


Join the club, then. My view is there is a borderline unhealthy obsession with unit testing to the point that the actual product code takes a back seat (in quality and functionality) just so that some usually arbitrary measure of "testable" applies to it.


Same here. Coverage is no indication of quality, and lack of coverage isn't necessarily an indicator of lacking quality.


> lack of coverage isn't necessarily an indicator of lacking quality.

"Lack of coverage" mostly means "untested code".

My experience is that it's very hard to keep untested code at a high quality level. Any modification that isn't directly justified by a customer feature or a bug fix is frowned upon, because it's hard to tell if it breaks anything; which means you pile up new features, but you can never modify their design so they fit together better.

When the philosophy is "now it works, let's never touch this module again!", code quality goes down the toilet.


In addition to that, how did he settle on unit tests instead of integration tests?


What do you mean? Is it too low or too high?


It is less a question of whether the percentage is correct than whether the tests are useful. I've seen plenty of useless tests (testing getters and setters in Java) that assert nothing related to the code's functionality but exist solely to boost coverage. Which is why mandating a strict coverage percentage is dangerous.

Better to just do real TDD in the first place.
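
To illustrate the difference, a sketch in JUnit 4 with a made-up Invoice class (names and the discount rule are hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class InvoiceTest {
        // Hypothetical class under test, inlined to keep the example self-contained.
        static class Invoice {
            private int cents;
            public void setCents(int cents) { this.cents = cents; }
            public int getCents() { return cents; }
            // Behavior worth testing: 10% discount on invoices over $100.
            public int totalAfterDiscount() {
                return cents > 100_00 ? (int) Math.round(cents * 0.9) : cents;
            }
        }

        // Coverage-padding test: asserts only trivial plumbing, not functionality.
        @Test
        public void getterReturnsWhatSetterSet() {
            Invoice inv = new Invoice();
            inv.setCents(500);
            assertEquals(500, inv.getCents());
        }

        // A test that earns its keep: pins down a business rule and its boundary.
        @Test
        public void discountAppliesOnlyAboveThreshold() {
            Invoice atThreshold = new Invoice();
            atThreshold.setCents(100_00);
            assertEquals(100_00, atThreshold.totalAfterDiscount());

            Invoice above = new Invoice();
            above.setCents(200_00);
            assertEquals(180_00, above.totalAfterDiscount());
        }
    }

The first test boosts the coverage number; only the second would catch a broken discount.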


This. Slavish fetishization of a specific code coverage target is indicative of an underlying problem, IMO, and that problem is far greater than having relatively low code coverage.

It is far better, IMO, to go in with an understanding of where your potential hot spots are than to simply add a test to everything. Sure, in an ideal world we'd have 100% coverage of everything, but this field is about tradeoffs, and sometimes writing tests simply isn't worth the time it takes in the long run.


In my experience it's too high. I see a lot of unit test code that doesn't do anything except add complexity. But again, I guess this will depend on the nature of the project


A Joel List is a quick-and-dirty list you can use to assess the competence of an organization. I don't think this list achieves that goal.

#1 and #2 (CICD) are fair additions to Joel's list, but I'd argue are already encapsulated by "do you make daily builds?". In most shops, if you make daily builds, then you CICD.

#3 = Joel's #4

#4,#5,#8,#10,#11,#12,#14,#15 are all "Do you SCRUM/TDD?". If that's the kind of place you're looking for, great. But there are many competent code-oriented organizations that do not SCRUM. So these don't really belong on a Joel List. (Also, "We don’t know the better way to make sure that code does what it’s supposed to, then to have another code [author means unit tests] that runs it and check results" just isn't true. We know better ways, and sometimes they're even relevant to a list like this. "Do you use any form of static or dynamic analysis (e.g., types, valgrind, quick-check style tools, linters, etc.)" is on my personal "Joel Test".)
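
Concretely, that last item can be as cheap as a line or two in the build (tool choices here are just examples):

    gcc -Wall -Wextra -Werror -c module.c     # static analysis: compiler warnings fail the build
    valgrind --error-exitcode=1 ./run_tests   # dynamic analysis: memory errors fail the run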

That leaves "do you have a library?". IMO work-place libraries are close to useless as signals (everyone has one), and rarely useful in practice (unless you're curious how PHP code was written in 2003 or really want to brush up on complexity theory).

As an aside, it's kind of depressing to me that we still make these lists. Back in the 90's, software engineering was still a relatively young craft with relatively few experts. Joel was part of a surprisingly small group of people who: 1) had a career's worth of experience developing software for micro-computers in high level languages; and 2) had deep and successful experiences across several organization roles in different types of organizations (coder, manager at MSFT, CEO at Fog Creek). The existence of managers who were in charge of software engineers but had no engineering experience wasn't surprising at all, given the youth of the field. Hence the Joel Test.

The world is a very different place today. There are a lot of people with this level of experience. Joel Tests aren't ubiquitous in other engineering domains, and hopefully they'll eventually die out in software as well. Not because the items on them aren't important, but because experienced Engineers manage Engineers.


You can generalize it, but the point was to make the list more specific. And yes, the world is a very different place today. The number of software engineers doubles every year, so at least half of them are new. We are engineers and we should try to measure our competency, and should try to systematize the things that we use. Of course, this list is not absolutely universal, but we should at least try to think about the standards that we want to meet.


> Do you contribute to Open Source?

Of all the Merits, this is one I disagree with. Its merit is primarily "joining like-minded folks" (i.e., a cult) rather than any inherent merit in and of itself.

How many great developers do you know who do not have this "merit"?


I think you are misreading this question. It is a merit of the organization, and I think it gets at the issue of lots of organizations using OSS but very few of them actually contributing back. What this requires of the organization is to acknowledge that they need to contribute if they want to benefit long term, and thus they need to allow and encourage their engineers to participate.

If employers encouraged it, or at least allowed it, you'd see a lot more great developers with this merit, but for most medium to large organizations it's a one-way street with OSS: they prohibit their engineers from open-sourcing projects or contributing to projects the organization depends on.


I know way more great developers who contribute to open source than great developers that don't.


Of course you do, they publicize themselves by contributing to open source...


The problem with this test and the Joel test is that an employer can check all of the boxes (essentially, do you follow modern software practices that were revolutionary 20 years ago, and are you "Agile") and it can still result in a toxic or less-than-optimal environment.

I did like the questions around OSS and sharing expertise. I'd like to see more questions that address recruitment anti-patterns (diversity, ageism, disclosing previous salary, etc.) and tech organization anti-patterns (an actual career path on par with management, non-transparent equity grants, etc.)

Like, what would the questions be such that even, say, Google didn't look so good answering them?


Such tests are not very informative when they are disconnected from the type of companies/industries one is looking for. This is a criticism I have of the Joel test, too, since at the time he wrote it it seemed to me a good way to evaluate prospective software houses. After all, he worked in one. Yet I dare say most of us will probably spend at least part of our careers working in in-house IT departments. Different rules.

It is not unlikely to find companies that fulfill all requirements, although they will likely know how attractive their working environments are and will filter candidates accordingly. An interview I had with such a small-sized software house two months ago confirmed this. I gave them the Joel test, which they had never heard of before, and they scored perfect. Dedicated testers, usability testing, quiet working environments (like a library, the team lead said; no need for headphones). Predictably, they were extremely picky as to who they let in.

The ones much less likely to get good test scores? 1) Government IT, by and large. 2) IT for any non-tech company below a certain size. 3) Non-tech corporations (and even some tech ones). One notable one I was aware of used Excel for bug tracking, was full of red tape, and their main technical test was a 20-question multiple choice.


I'm not quite sure what your point is. The examples you gave of places that don't do so well on the Joel test still sound like places most of us would not want to work.


Do you have automated end-to-end tests in DSL?

Seriously? What is considered a DSL?


And what's more, why does it matter? I assume they mean something like Cucumber, which, while I'm OK with it, doesn't really add any benefit over well-organized tests written in the language your code is in.


Because it allows you to keep the domain logic somewhere. These days people don't write documentation, or the documentation is always outdated. Having high-level tests in a DSL does more than just test: it gives you information about how your application behaves from the user's perspective, in a more general sense. So you will have fewer issues when new people join the project, or when the project owner changes. And it forces you to focus on the goal during feature implementation. From my experience, new features often have behavior that is not obvious, and sometimes conflicts with the logic of other application features. These tests let you see those conflicts before implementation.


I disagree, but I get where you're coming from - which is exactly the problem with this list. The Joel Test was great because everything on it was universally accepted as something that every developer would want to happen where they worked. With this, different developers are going to have different opinions of many things on the list, making the "score" lose its usefulness, since now every time a company has less than a perfect score, I need to figure out why (is it because they don't have a library? I don't care so much about that. Is it because they don't have CI? I care a lot about that.)


I'm not a fan of Cucumber; you can achieve the same results by just reusing code. A big problem I have is that it ends up introducing pointless indirection. Defining steps in files separate from the feature being tested, even when they are only used in that single feature file, is unnecessary abstraction. Having to grep for step syntax in a project to try and find the code that implements a step is insane.


Domain-specific language, of course. I prefer Gherkin.
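
For illustration, a tiny made-up feature file in that style (the feature and amounts are invented):

    Feature: Invoice discounts
      Scenario: Discount applies above the threshold
        Given an invoice totalling $200.00
        When the invoice is finalized
        Then the total due is $180.00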


Do you follow the shibboleths of my religious sect?


I think I get -9 for one current project, though I'm not sure because I don't understand several of the weird ones.

Then again, the client likes that project, because their customers also like it. It solves a problem for them that no-one had solved in a similar way before. I don't think we've had a single major bug reported against that part of the system by any customer in over five years, and typically that includes a multi-month lab evaluation by each customer before deployment.

So, does the development process on that project suck or not? :-)


If you're looking for a set of basic criteria for well-run open source software projects, please check out the CII Best Practices Badge: https://bestpractices.coreinfrastructure.org/

Full disclosure: I lead the project.

Constructive comments very welcome!


Never even heard of half the terms in "obligations". Somebody should tell my employer that I'm not competent.


Eventually someone will :)


Probably the new intern with the 'information technology' major.


If you really want to improve the Joel Test, fine. The suggestions are even right. But please, KISS. The original test had 12 items, this one 20. This should be the upper limit, more or less, for any improved version. Otherwise it is probably too long or detailed to be of practical use.


It's also way too specific and biased. Rules like "do you have end-to-end integration tests?" aren't always obligations for all teams, whether that's because your team is doing embedded work or because you're doing something better (like consumer-driven contracts). Other rules like "do you have a primary communication channel?" are even at times counter-productive (in particular, when dealing with a variety of customers who have their own preferred methods of communication, which you must accommodate). Daily status meetings are sometimes unnecessary, if you have a small enough team sitting in its own room or more tightly involve team members in planning and review practices, and indeed daily status meetings often clash with more important practices like flexible scheduling and not interrupting flow.


Thanks for the feedback. We decided that a more detailed test gives a more precise measure, but we still try to keep the list as short as possible.


It would be great to have some compiled data on various companies/teams and the obligations + merits that they do or do not meet.



