Can we please talk about how terrible BitBucket is?
No syntax highlighting on diffs, horrible defaults (closing the development branch when you merge into master???), not being able to make a PR after you make one commit to a branch without refreshing and losing your message, inconsistent code formatting that is just horribly broken in general, weekly downtime that's not reflected on their status page, having to manually press a button to see updated diffs after updating a branch, no support for signed commits, API support lacking in the weirdest places, random failures in commit webhooks, etc etc.
I've also tried the 10 user licenses for both Jira and Bitbucket servers. I absolutely loathe Jira. I respect its vast featureset, but user experience as a programmer has felt absolutely abysmal. The React redesign only confused me more. To find the Kanban boards, you had to hit the search button! I hope Jira has improved since last time I used it, but I have to say I'd probably prefer just about anything else at this point.
As for Bitbucket... It works, but I can't think of a whole lot it does that GitLab doesn't do better.
Yeah, Jira is horrible. Between the workflows, search filters (create one, apply it to a board, but then you can't edit it?) and everything else, it's really clunky to use.
I've done test runs with other tracking software, and I can't really find anything better. Every tool sucks in its own way. Do you have a recommendation on something that you've found that's better than Jira?
Not really. This seems like a largely unsolved problem.
At my previous startup job we evaluated JetBrains YouTrack and GitLab issues. Both were fairly competent-looking, both had nicer interfaces, but nothing has the same feature set or ecosystem as Jira. GitLab now has multi-project boards and help desk support, so it's actually getting to be pretty useful in its own right for issue tracking.
I like Phabricator. It doesn't offer as much customizability as Jira, but the options it does have are good for software development workflows. Plus it's open source, which is always a bonus.
Thanks for the advice, this looks like a nice tool. If there were enough drive from some devs and a thriving community, this could become a standalone alternative to Jira one day.
Use YouTrack, it's from JetBrains. Very flexible (you can write workflows in a programming language), much faster, great keyboard shortcuts, and a much better interface.
For tracking my personal task list I'm quite liking emacs org mode so far but I'm still getting the hang of it. It's not really a solution for teams though.
Jira is still horrible, I've only been using it a few months so I don't know if it's improved since last time you used it but if it has then it hasn't appreciably moved the dial from abysmal. I die a little inside every time I have to interact with it.
- Throw Bitbucket away, I don't even know why they wrote it in the first place
- Use GitLab; even its CE edition is a lot better than everything Atlassian does with Bitbucket and Bamboo combined
> closing development branch when you merge into master???
Why's that a horrible default? It doesn't happen to match the workflow you use? It matches the one I use. So I can understand it's not ideal for you but they can't suit everyone with a binary default so what makes it so 'horrible'?
Remember Atlassian in their infinite wisdom renamed the on-premises product formerly known as Stash to Bitbucket despite being a completely different code base to Bitbucket.com. I think they are talking about Bitbucket.com.
> It doesn't happen to match the workflow you use? It matches the one I use.
You're in the minority here. For the majority of software projects branches like `develop` are usually considered "long living" and are not "merged in" when arriving at `master`. That's why we have things like Git Flow [1].
Also an action that defaults to destroying something is user-hostile, so that's why it's a big deal and thus why GP wrote their comment.
I use the feature branch Git workflow. How do you know the majority of people here use Git Flow rather than feature branches? Have you just guessed that?
And anyway Git Flow still has feature branches that you close on merging!
Bitbucket provides free private repositories, which I really enjoy for personal projects. I never log into the site though, just a git server I don't have to manage myself.
Gitlab. Same thing, massively better performance, feature set, etc. And you have the option of taking it in-house should a project scale to the point where that's useful.
I migrated everything off BitBucket to Gitlab. I'm forced to keep a Github account because it's expected. As soon as enough people move away from Github to allow me to drop it, I'll migrate those too.
Bitbucket has been offering me free private repos for years, and it's basically never been down. It integrates perfectly in everything I use in my workflow, while sheltering itself from the hype train and avoiding providing useless features just because they are cool. Hands down one of the best products out there. My 2 cents!
Oh, I beg to disagree. The last 8 months have been much better, but in 2015-2017 it was like 3-5x per year that I had to tell my boss that we can't deploy because Bitbucket wasn't triggering the CI server.
edit: come to think of it, I switched timezones from California to Asia. I haven't run into as many problems because no one is awake to break something :).
We keep an ifttt webhook into this page, very helpful to resolve the WTF chorus when something is busted. Props to Atlassian for transparency: please keep this going.
We use confluence, bb, and jira. The good news is that triad is very nicely integrated: you can easily crosslink issues between them. There is definitely room for improvement on uptimes though.
As I said in my original comment, the performance issues and downtimes we experience are just not present on the status page.
As one data point, several weeks ago it would take BitBucket over 70 seconds to respond to a `git ls-remote` on a repo containing about 200 branches. Usually this takes around 10 seconds, but it caused all kinds of headaches with Jenkins. On top of this, pushes were incredibly slow.
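For anyone who wants to reproduce that kind of measurement, a rough sketch: time `git ls-remote --heads` yourself and count the branches. The remote URL in the example is a placeholder, not a real repository.

```python
# Time `git ls-remote --heads` against a remote and count the branches.
import subprocess
import time

def count_heads(ls_remote_output):
    """Count branch lines in `git ls-remote --heads` output."""
    return sum(1 for line in ls_remote_output.splitlines() if line.strip())

def time_ls_remote(remote):
    """Return (elapsed_seconds, branch_count) for the given remote."""
    start = time.monotonic()
    result = subprocess.run(
        ["git", "ls-remote", "--heads", remote],
        capture_output=True, text=True, check=True,
    )
    return time.monotonic() - start, count_heads(result.stdout)

if __name__ == "__main__":
    # Placeholder URL -- point this at your own repo.
    elapsed, branches = time_ls_remote("git@bitbucket.org:someorg/somerepo.git")
    print(f"{branches} branches in {elapsed:.1f}s")
```

Running this on a schedule gives you your own numbers to compare against the status page.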
They definitely need more alerts pertaining to slow API responses on the git interfaces. I still give them props for usually fessing up to outages. Some places (cough aws) will be out for hours before they admit to it -- not gaming any numbers, nope never.
Because Bitbucket Server (on prem) has been fantastic for us and is continuously improved. From what I know, it's a different codebase though (was once known as Stash)
Not allowing people to talk about how your application performs is not compatible with GitLab's value of transparency. I've added a commitment to always allow people to do this to our stewardship promises: https://gitlab.com/gitlab-com/www-gitlab-com/commit/da81150e...
This is a bit off-topic: (OP here) the reason I was even reading Atlassian's terms is to understand what kind of language is used in enterprise self-hosted software, because I need it for my own product. I really like GitLab's open-core model (as long as the core is still perfectly usable), and I was wondering if I could ask you some questions about it, as I'm looking to adopt something very similar? Could I contact you over email about this?
Atlassian has always forbidden discussing the performance of their products, in their ToS and in their previous EULA. We all know why, but we don't talk about it.
One of my customers even raised the issue with them, saying it would cause harm to the company if it came to be known, but they dodgeball’ed it with legal.
Atlassian always wanted to compete with those companies but will never succeed.
Just like they had to admit that HipChat and Stride are worse than Slack and they'd never get to the point that someone would want to use their chat tools, they will have to admit that Bitbucket + Bamboo will never be able to compete with GitLab or GitHub.
In my opinion the only really good things are Jira and Confluence.
There is no standalone tool that does what Jira does as well as they do.
There has never been an enterprise wiki as usable and consistent as Confluence. Even though I have many pain points with it, I don't know of anything better for a wiki that has to be used by everyone from management to development (devs cry the most about it).
It seems like they're balancing features against speed: to get a bunch of extra features, every page load needs 200MB of extra crap that 99% of users won't use on 99% of their page views.
Oh and hamburger menus are hiding the key features you DO use every page view.
Why doesn't JIRA offer a native desktop app? Everyone using it is on PC, Mac or Linux right? And then a cut-down iOS (iPhone and iPad) app covers the rest?
A few colleagues use a Visual Studio extension, although that seems to be mostly handy for time-tracking and limited interaction with issues. It's also stuck in a UI that goes back to 2008, so perhaps rather fitting ;)
The new design for Jira really sucks; it has made everything slower and crappier to use. Page load times of 10+ seconds are common, with no clear usability gains.
I was a huge fan, but any more I just want to use GitHub.
Please talk to your admins! In my experience, most Jira slowdown issues can be traced back to permission checks. (And you can enable tracing in the system settings to verify this.)
If only your team needs to access it, set the instance to open permissions (i.e. most rights are set to "Everyone") and then control access using the network or with a proxy in front. This took the instance I run at $dayjob from infuriatingly slow to no slower than any other web application.
This is knowledge that could come in handy one day, so thanks.
But fine-grained permissions are a fairly regularly cited reason for using Atlassian’s stuff over other (simpler...) offerings. This sounds a lot like “don’t try the fancy stuff because it’s unusably slow”.
I'd be willing to bet part of that is too many fine-grained permissions. It's a bit of a footgun.
Basically, anytime you hit a ticket's page, Jira has to scan your account's group memberships and compare that against a litany of permissions to determine whether you can even see the ticket and what actions you can take with each of its fields. This is done to avoid showing you UI elements for things you can't do. If a given permission is set to "everyone", the check simply isn't done (kinda the equivalent of replacing a call to the user directory with a "return true;")
I'm not talking entirely about reducing security here. I mean that if everyone in your team is a member of a certain security group, and only your team touches your Jira, set that permission to "everyone" rather than using that security group - the lookup is completely unnecessary in this case.
Basically, you want as few permissions as possible to ensure the level of security you actually need. Often this isn't done, people go a bit crazy with permissions schemes trying to segregate this and that.
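The short-circuit being described might look roughly like the sketch below. This is purely illustrative, not Jira's actual internals, and all the names are invented.

```python
# Simulated user directory -- stands in for the expensive lookup
# (think an LDAP or Crowd round trip) that we want to avoid.
def user_in_group(user, group, directory):
    return group in directory.get(user, set())

def can_view(user, permission, directory):
    """Check one permission, skipping the directory lookup entirely
    when the permission is granted to everyone."""
    if permission["grantee"] == "everyone":
        # Effectively `return true;` -- no user-directory call at all.
        return True
    return user_in_group(user, permission["grantee"], directory)

directory = {"alice": {"dev-team"}}
open_perm = {"grantee": "everyone"}   # "Everyone" permission: no lookup
group_perm = {"grantee": "dev-team"}  # group permission: needs a lookup

print(can_view("bob", open_perm, directory))   # True, without any lookup
print(can_view("bob", group_perm, directory))  # False, after a lookup
```

Multiply that saved lookup by every field and action on a ticket page, and the "set it to Everyone" advice starts to make sense.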
This is also very similar to the clause that allowed Larry Ellison to allegedly try to have a professor fired for benchmarking Oracle: https://news.ycombinator.com/item?id=15886333
I was just going through theirs (and other) terms and conditions to understand what kind of legalese goes into these and I was appalled at this. While these terms aren't effective until after 01-Nov-2018, I tried searching online to see if there was any discussion about how stupid this is, and this unanswered forum post came up.
I have no idea how this is still considered acceptable.
I just looked at the text of the law. As the name suggests, it only applies when an individual is a party to the contract. If a company is the licensee, it wouldn't apply.
Plenty of proprietary software has these restrictions: Datomic, MS SQL Server, etc. I suppose you could make a case that most published benchmarks will be flawed or inaccurate, but forbidding them is still wrong.
I've submitted many benchmarks over the years for new jira and confluence UI "upgrades". I've received responses to the tune of, "our developers don't see that behavior issue on Windows" even though the test systems I used were Linux based and clearly stated as such. Speaking frankly, the new jira ui is fucking shit and makes my job slower due to the fact that pertinent information is obscured and only clearly displayed when I click "use old interface". This means that I must first wait for the whole page to render, click a link, and wait for the entire page to redirect to the old UI and render again. Multiply this for every single ticket I have to open and you can see the minutes tick off the clock.
I feel like there is a subtle pattern of attitudes and behavior that I pick up on from Atlassian. It’s almost as though they don’t get that they need to keep working to innovate and please their customers to stay relevant. Look at what happened to Hipchat, and what I’d argue will eventually happen to JIRA.
It’s almost as though they don’t get that they need to keep working to innovate and please their customers to stay relevant.
Like all enterprise software vendors their customers are the ones who buy it, not the ones who use it. In almost all cases the person signing the cheques will never experience any of the issues themselves.
This happened at least 2 years ago; every update to Jira makes it slower and less usable. I have given them this feedback many times, and I suspect I am not alone, and they simply do not care. Until they see subscriptions drop, they will do nothing. They are like Rational Software or CA when they hit the big time and started buying up all their competition.
Slack pretty much killed HipChat and drove directly into enterprises, which should have been Atlassian sales. Once you have your tentacles in the enterprise (and Slack is almost there), it's very easy to sell huge contracts of pretty average software. JIRA is great, but it's not that complex, and it would not take much to start chipping away at it.
JIRA is quite complex, but it suffers from a Swiss army knife complex, and at some point you end up "programming in JIRA" instead of using a much simpler purpose-built tool, or even building one.
If your processes are complex, you likely need a flexible tool which allows arbitrary workflows. Of course you might argue that complex processes should bend to match tools, but not everyone who pays for software will agree, so Jira definitely has its market as a flexible tool, even if it means there have to be dedicated Jira developers.
I heard from a friend at a Fortune 100 company you would all know that when they tried piloting HipChat internally, it simply did not scale well. Their servers choked under the load so frequently that they got out of the contract.
Not complex? We have to hire entire teams of people just to handle that tangled mess of options and weird defaults and strange behavior so it doesn't slow down the developers and everyone else.
Jira is complex but no one needs that complexity. Jira is built to sell to executives, not for anyone to actually use. Executives need to know that they'll be tracking their employees time down to the second with Jira.
But then they're never going to look at the UI for Jira either because they're too busy for that.
I'm not gonna defend Jira. I don't like it. But this is the same thing people like to say about Office. It turns out that the 80/20 rule (80% of users only use 20% of features) never guarantees that even 50% use the _same_ 20%. You need to implement a lot more of Jira's complexity than you might think to get a significant part of Jira's market share.
(Slack is another good example. Slack is "just" an IRC client with better emoji/GIF support. But I'll be damned if Microsoft Teams doesn't _really suck_, even with the benefit of knowing how Slack does everything. Good software is hard and takes work.)
> I'm not gonna defend Jira. I don't like it. But this is the same thing people like to say about Office. It turns out that the 80/20 rule (80% of users only use 20% of features) never guarantees that even 50% use the _same_ 20%. You need to implement a lot more of Jira's complexity than you might think to get a significant part of Jira's market share.
Trello was the counterexample: it implemented less than 20% of Jira's features and was all the more useful for it. (thus Atlassian bought it and have started ruining it with bloat).
Jira + Confluence Cloud's performance problems are the number one reason our company is actively seeking ways to get the Atlassian needle out of our arms. It's really tough once you've invested the time to get Jira working just the way your org works, but the issues are piling up and the complaints grow louder each month.
Shameless plug: If you’re looking for a Confluence replacement would love for you to try my startup’s product, https://tettra.co. We have Github and Slack integrations and are working on Jira soon too.
"We don't plan to offer a hosted version of Tettra in the immediate future. We believe that secure web services are the new standard for business applications and take security seriously."
Seriously, NO.
It has to work without an internet connection at all. That includes installation, updates, and help pages. People who care about security have air-gapped networks. To update software, we burn it to a DVD and then walk it over to the secure network.
I have a hard time understanding how anybody would tolerate their business secrets being on your servers. That is really weird to me.
Uh, this is so far outside the norm of every company I’ve ever worked for/heard of that I think they’re safe not worrying about you in their business plan. Unless you’re working with classified data, that is — but those cases have tons of other requirements beyond working air gapped.
Now, there are other reasons you would want on-prem other than air gapped networks, but that’s not the discussion we’re having.
Edit: oh, this guy is a troll. Just read his other comments. Sigh.
If it is "so far outside the norm", then that could be why people keep getting hacked and facing clone companies popping up in other countries. Where I work, with hundreds of people, we aren't making this mistake.
As for trolling, "Assume good faith." is in the site guidelines:
https://news.ycombinator.com/newsguidelines.html FYI, I'm not trolling, and any appearance of it is due to cultural differences.
What sorts of issues? For me, performance is the issue, I've never found JIRA buggy or inadequate, and I'm generally able to find a given ticket and interact with it in the expected way.
I'm not sure how you can have used the UI and hold this opinion. I feel like I've been greeted by the same basic bugs time and time again, like clicking on an issue and it not opening until the second click, or it opening the previously viewed issue.
Jira performance takes a fair amount of experience to nail down. If anyone needs help, that's my company's (https://atlasauthority.com) core focus. I spent 4 years at Atlassian in support, and now we work with a number of the largest deployments in the world to help make things fast(er). It won't ever reach the sub-second standard we like to see from modern SaaS apps, but it also doesn't need to be the 12s page load times we often see in client environments.
I have used JIRA for 5 years and came to it begrudgingly when the project was complex enough. It handled the complexity, details and customizations of the project well enough though.
It's funny that reporting benchmarks is being banned. The pretense that web apps are too complex for performance to be reasonably communicated is laughable.
There is a 15-year-old bug open to reduce the flood of emails Jira can create from updates. JRASERVER-1369 [1] could be its own community.
With minimal Java/JVM tuning experience, it seems likely the cloud product is aggressively cached and resource-throttled. Once a page or filter has loaded, it generally loads quicker.
The speed of an on premise install of JIRA compared to the cloud can be staggeringly faster.
One solution might be for some clever person to first crawl an entire Jira or Confluence site, and then continually ping all the pages to keep the system performing better.
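That cache-warming idea could be as simple as the sketch below. The page list and interval are made up; a real version would discover pages via the REST API rather than hard-coding them.

```python
import time
import urllib.request

def warm(urls, timeout=10):
    """Request each page once so server-side caches stay hot.
    Returns a list of (url, status_or_error) pairs."""
    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status))
        except Exception as exc:
            # Record the failure and keep going; a warming loop
            # should never crash over one bad page.
            results.append((url, repr(exc)))
    return results

if __name__ == "__main__":
    pages = ["https://yourcompany.atlassian.net/browse/PROJ-1"]  # placeholder
    while True:
        for url, status in warm(pages):
            print(url, status)
        time.sleep(300)  # re-ping every 5 minutes
```

Of course, this is papering over the problem rather than fixing it, which is rather the point of the complaint.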
Atlassian employee here. JIRA's backend is not Node based. It's mostly Java. We do use Node for other products, including Trello, and it's more than fast enough.
> everyone is using nodejs and your language is now banned...
> Add internal politics... hire as many people as possible and terminated anyone raising valid concerns...
> monolith service now becomes highly distributed...
As some comments mentioned before, there is perfectly good software written in NodeJS (Trello) that is blazingly fast. Similarly, there are plenty of good, performant multi-tenanted applications on AWS. Anything that has been used for a long time (Linux, Java, protocols, etc.) is always going to carry tech debt, and there is nothing wrong with that.
While there may be a grain of truth in some corner cases of what is written, overall it comes across as emotionally negative venting, in some places bordering on indirect propaganda against generic tech (sorry).
In this day and age, languages, frameworks, and libraries that deal poorly with parallelism deserve criticism for it.
Yes, competent engineers can work around these problem given time (e.g. pivoting from thread-level to process-level parallelism), so it wouldn't be fair to entirely blame nodejs for the problems, but that wasn't the vibe I got from GP.
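That thread-to-process pivot is the standard workaround for single-threaded runtimes. Here's a generic stand-in in Python rather than Node, just to show the shape of it; the workload function is made up.

```python
from multiprocessing import Pool

def busy(n):
    """CPU-bound stand-in for work that won't parallelize on threads."""
    return sum(i * i for i in range(n))

def run_parallel(tasks, workers=4):
    """Fan the tasks out across worker processes instead of threads."""
    with Pool(workers) as pool:
        return pool.map(busy, tasks)

if __name__ == "__main__":
    # Four independent chunks of CPU-bound work, done in parallel.
    print(run_parallel([100_000] * 4))
```

Node's `cluster` module and worker processes behind a load balancer are the same idea; the cost is that cross-request state now needs shared storage.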
Finally, some sort of explanation. The feature set is great for complex use cases, but the cloud version is unusably slow. Worse, the support team seems trained to pretend it's not a problem. The last web app I remember being as slow as Jira is Friendster.
The hyper-distributed microservices are definitely one of the big reasons for the unnecessary complexity and latency problems. It was surprising to me to see that the JVM-based services were load tested for only 4-10 minutes before the test was called successful.
> nodejs because everyone is using nodejs and your language is now banned it's not surprising there's noticeable performance differences, it was certainly predicted early on but they were too caught up trying to be google and using the popular language to hire as many people as possible
I wonder how typical this is nowadays in IT companies and departments all around the world.
I'm aware of a nodejs script that is basically packed up with the node runtime and run on Windows as a "native application," which interacts with the filesystem, windows registry, and more. The packer is years old and nobody knows how it works.
It is everything you expect of such a program. From both a user and developer perspective.
It is deployed on very expensive, very important equipment in pharma labs and hospitals.
This is less uncommon than people like to admit. I've seen, and in some cases been directly involved with, code written by an inexperienced (giving the benefit of the doubt here) developer whose position was replaced twice over, that runs critical systems on big-money (but not life-threatening) equipment and business systems. No documentation, no tests, even one that relies on a known platform bug never being fixed (not kidding). Nobody knows how it works, and there is no time to go through it and clean it up (business decision).
I've even stated in writing on one system that I refuse to receive "emergency" calls on holidays, weekends, or nights to fix a particular system if it ever goes down at that time. Management refusal to plan ahead or heed warnings does not constitute an emergency on my part.
> Add internal politics where some things which are heavily multi-threaded, built in statically compiled languages, had to be rebuilt in single-threaded nodejs because everyone is using nodejs and your language is now banned. It's not surprising there are noticeable performance differences; it was certainly predicted early on, but they were too caught up trying to be google, using the popular language to hire as many people as possible, and terminated anyone raising valid concerns, i.e. that it's slow and node wasn't right.
What languages were banned? This is a very interesting data point.
FYI, I have huge problems with your products at work. We use a variety of your products and competing products, on a project-by-project basis. I always advocate for your competitors.
One reason is that you made the business decision to attempt customer lock-in. You supported wikimedia-style text markup in your wiki (in addition to the GUI) so that people could migrate to your stuff, and then you took out that feature so that people would have trouble leaving. I'm sure that makes sense to an MBA, but I have been discouraging use of your products all throughout a large company.
The other reason is that yes, all your stuff is slow as fuck. OMG it is slow. The Java grows to consume gigabytes for no damn reason, and it munches CPU time, and generally it sucks pretty hard.
Golang might be right for you. I think it's the fastest choice available for development teams that can't handle stuff like pointers. On the other hand, I think you would still manage to sort-of-leak memory by hanging on to references that you really don't need.
I think you could fix the slowness problem by requiring all development and testing to be done on computers that are slower than the ones your customers use. Get an old Pentium II with 256 MiB of RAM... which is still overkill for the task at hand. Remember, back in the day we ran stuff like your products on computers with 8 MiB or less and a 486 or less. You can live with a Pentium II and 256 MiB of RAM, and the resulting performance of your software will delight your customers.
I'm skeptical that there isn't another side of this story or at least less absolute terms used.
For example, maybe there's a push against everyone using all their own favourites everywhere. I can see a strong argument for, "please stop writing in Foo. We use Java, Go, Python, and C++. Pick the right one of those for the job at hand. We all benefit from using a common set of tools."
I'm not denying what you're saying, just feeling a healthy skepticism that they "ban" languages without a good-faith objective.
I’d probably stop talking on this thread if I were you. If your management chain sees this, you’ll have at minimum a conversation about using judgement when speaking in a public forum, at worst, your comments might be used as justification for disciplinary action.
Disparaging your former employer might also be against your employment agreement and doesn’t make you look good either. Just giving a friendly heads up!
It’s no secret. We’re trying our best to drag our 15+ year old codebases into the modern age. We’re hiring if you want to help! https://www.atlassian.com/company/careers
...until November, when these outrageous terms come into force?
It also seems somewhat delusional (or perhaps "misguided", if I were feeling charitable) to try to hire in a thread bashing Atlassian for these downright scummy terms, and indeed for having tools that are simply awful to work with.
I clicked the link above, edited my cookie preferences on the pop up, waited 5 seconds for them to be updated... still updating so closed the tab. Modern UX sucks.
I will be honest, I was completely unaware of your other products, not that I would be your target audience. That being said, thank you for not pushing your other products on tracker users. That was something that always annoyed me with Atlassian.
Article 19 of the Universal Declaration of Human Rights:
"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."
So it's a basic human right to be able to express how fast or slow Atlassian's products are.
Further, the First Amendment to the United States Constitution:
Amendment I
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances."
https://www.law.cornell.edu/constitution/first_amendment
Wow, I last used Jira a long time back, when it was pretty much new, and loved it, and I loved the values Atlassian had. It was far better than anything I'd come across before.
Kinda shocked to see all the negative posts on here. How can a product fall so much?
To be honest, most companies using Jira never actually configure it to match their requirements. That's not really their fault, Jira is a beast and quite a lot of time needs to be put into getting it just right.
Jira's always been hard to get going with, and it's still way better than pretty much everything else (once configured), but they are really bad at actually making it easy to use.
The (extremely large) company I work for has a product management team that customizes Jira to match what it wants, which does not work at all for anyone else. Upper management of course doesn't understand anything about software development and just wants to see meaningless burndown charts. I am sure it's possible to set up Jira in a useful fashion, but damned if I have ever seen it done.
Are there any examples of companies taking action when these kinds of restrictions are ignored?
I would have thought terms like these would prompt people to release benchmarks for the sole purpose of generating bad PR if the company actually took action.
I'd also add that Atlassian are a well-known user of immigrant labour under an Australian visa scheme similar to the H1-B. They are hugely pro-immigration publicly also. (Despite this, their founder just bought a huge mansion in a very expensive part of Sydney completely protected from population growth due to restrictive bylaws).
I'd propose that they are hiring cheap workers just to keep the ship afloat, rather than to radically improve it.
I’d propose that you’re wrong - with relocation and visa costs it’s significantly more expensive to bring in someone from overseas. Not to mention it’s frequently 3-4 months after interviewing before they can start, as opposed to 3-4 weeks for a local hire. We hire as many local people as we can, the interview process isn’t any different.
The other aspect of the founders believing in the importance of being able to hire talent from overseas is actually at the higher levels - the local talent pool of managers with 10+ years experience running large SaaS orgs is pretty small, given the industry here is fledgling. We’ve developed our own leaders internally, but there’s no substitute for bringing in an external hire to develop the next generation of leaders.
I always wondered how or if it is possible to place arbitrary restriction on software use.
Also I wonder if a clause like this would be binding for tech journalists who run a benchmark because essentially they don't really agree to the license when they are testing software.
> Except as otherwise expressly permitted in these Terms, you will not [...] (c) use the Cloud Products for the benefit of any third party;
What does that even mean? "Benefit" is such loose language. Can I not use JIRA to build anything that benefits my customers? Can someone with experience working on such terms shed some light on this?
At one point I was writing test automation against their JIRA Cloud offering, because they didn't provide an analog to their authentication API in the JIRA on-prem version.
To get the tests to pass, I had to create a jiraRetryFixture and when that didn't work I wrote a preflight check which would just skip those tests if it wasn't available.
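The pattern described, a retry wrapper plus a preflight check that skips tests when the service is unreachable, might look something like this sketch (the helper names are invented for illustration, not the actual `jiraRetryFixture`):

```python
import time

def retry(fn, attempts=3, delay=0.1):
    """Call fn until it succeeds or the attempts run out."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

def preflight_ok(ping):
    """Return True if the service answers; tests can be skipped when it doesn't."""
    try:
        ping()
        return True
    except Exception:
        return False

# With pytest, you would wrap these in a fixture and call
# pytest.skip(...) when preflight_ok(...) returns False.
```

It keeps CI green, at the cost of silently not testing anything when the service is flaky, which is exactly the trade-off being complained about.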
Slow/frustrating UIs can be one of the biggest barriers to productivity.
At Monolist (https://monolist.co), we’re building a streamlined task experience that integrates deeply with Jira (and Confluence) Cloud specifically so you don’t have to deal with these painful UIs.
There's no serious alternative to Jira (if you have one, I would love to know about it).
Hard to find a decent alternative to confluence for an internal wiki, particularly one with a good ACL system. We need certain customer details locked to just the people servicing those accounts.
Jira's primary user demographic is by its very nature capable of building a replacement for it.
Build a bare-bones system, just like Git itself, that keeps track of the data and ACLs, and let the Silicon Valley startup guys make fancy web interfaces and cloud packaging to make it palatable to middle management.
Like Git, the core can be moved around and interacted with in the terminal, so you're not dependent on any one vendor, and if you don't like any of the GUIs you can just work from the console.
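To make the idea concrete: one possible shape for such a core is issues stored as plain files in a repo-like directory, with ACLs alongside, so any frontend (web or terminal) reads the same data. This is purely an illustrative sketch of the proposal above; the layout and function names are invented, and a real system would want locking, history, and actual ACL enforcement.

```python
import json
import pathlib


def init_store(root):
    """Create the bare-bones store: an issues/ directory plus an ACL file."""
    root = pathlib.Path(root)
    (root / "issues").mkdir(parents=True, exist_ok=True)
    (root / "acl.json").write_text(json.dumps({"readers": [], "writers": []}))
    return root


def create_issue(root, issue_id, title, body=""):
    """Write one issue as a standalone JSON file, addressable by any tool."""
    path = pathlib.Path(root) / "issues" / f"{issue_id}.json"
    path.write_text(json.dumps(
        {"id": issue_id, "title": title, "body": body, "status": "open"}))
    return path


def list_issues(root):
    """List issue titles; a GUI and `ls` see exactly the same data."""
    issues_dir = pathlib.Path(root) / "issues"
    return sorted(json.loads(p.read_text())["title"]
                  for p in issues_dir.glob("*.json"))
```

The point of the flat-file layout is the same as Git's: the on-disk format is the interface, so vendors can compete on frontends without owning your data.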
The HN title is misleading: the terms do not forbid benchmarking, just public dissemination of the benchmarks. This is a standard software EULA clause.
One reason we decided to keep this clause in the Caddy EULA (which I should clarify here only applies to official binaries, not the open-source, Apache-licensed binaries you can build yourself) is because we found out that very few people are expert enough to benchmark correctly. I've read a dozen Caddy benchmarks, for example, that turned out to be based on false assumptions, or had hidden factors, or were simply not reproducible (and not just by me).
Benchmarking requires expertise that, it turns out, very few people have. I don't think I even have enough skills to do it correctly and meaningfully.
Also, web servers are complex enough (in terms of both configuration and all the layers involved with networking stacks) that one correct benchmark is not generally useful to the next person.
Spreading wrong performance information can hurt a business. It's not that there's anything to hide or any desire to take away your freedom -- and I would normally be one to assume the worst from any large company -- it's just business: they don't want the risk of bad PR based on a possibly false premise, especially when that information tends to only create negative hype rather than actually being useful.
Anyway, this link doesn't seem like news. Just usual HN hype.
>> The HN title is misleading: the terms do not forbid benchmarking, just public dissemination of the benchmarks.
If a tree falls in the forest, does it make a sound?
The title is not misleading. Allowing people to run benchmarks but forbidding them from publishing the results is de facto banning benchmarks, period. Hiding behind the idea that benchmarking is hard, and should therefore be banned because the results might be fake news, is ludicrous.
It is also not standard software EULA licensing, unless you think Oracle's practices are somehow industry standard and good for everyone.
This way they can make you spend a ton of time bringing up a trial version on your internal systems to do the benchmarking, effectively forcing lock-in and sunk-cost thinking, then put you into tons of meetings that are really just attempts to upsell.
Honestly, if I had widely used software with an enforceable EULA term which allowed me to benchmark but not publicly disclose bad results I found, it would make for even worse PR for the company: I'd be able to go to a tech industry reporter saying "I ran benchmarks on this software, and I think many people would be interested in the results, but the company forbade me to release them publicly. I will still share them non-publicly with any interested parties under NDA." Or if private dissemination were also forbidden, I'd change the wording accordingly.
The better way for a company to handle this concern, if they feel it's important, is to proactively run and release benchmarks including commentary on the results, together with everything necessary for anyone to reproduce their results. Even better if they fund a trustworthy neutral third party to do this instead, with proper disclosure of the funding.
They can then respond very effectively to bad PR about badly done benchmarks. Unless their performance is actually bad, of course.
That's certainly how things should be. But this clause was one of Oracle's early innovations (https://danluu.com/anon-benchmark/) and they did pretty well. Do we need to understand how they got away with it to have a good chance of changing the norm?
Question for any lawyers - can they gag any benchmarks that are published against the terms? I'd imagine the license terminates - what kind of damages can companies realistically extract when you break your license agreement?
Clauses like these make me think you think there is something to hide. Which is perhaps a bigger red flag than the attempted censorship in itself.
> Spreading wrong performance information can hurt a business.
Firstly, I'd suggest you should perhaps focus on trying to educate your users on how to make better performance tests - if they are bad at making benchmarks then they are likely bad at running your server as well.
Secondly, boohoo. Not spreading unflattering but correct performance information might not hurt your business but will hurt your customers.
Lastly, you are curtailing speech, which isn't ethical, and because of that I'm pretty indifferent to any hurt visited on your business. Imagine if everyone did that about everything.
> complex
> requires expertise
I'm sure every despot anywhere used a variation of your argument, something about economics being complex offshoot of mathematics that requires expertise to handle so you best not share any overly rushed opinions.
> Apache-built binaries
Which people can benchmark to their heart's content, and I'd hope the clause irritates many people into publishing their own benchmarks.
I think you should also include restrictions in your EULA to prevent people from publishing statements about your code quality too. Very few are qualified to measure it, and spreading potentially false information can hurt a business. I can think of several other items that have limited expertise in evaluating and could cause negative PR harm if done falsely, so you probably should just enumerate those in the EULA too.
Also, I gather from your comment in support of these restrictions that they are more than boilerplate and you will pursue offenders.
I think car companies should prevent people from publishing safety or performance analysis of their vehicles. And restaurants should disallow public reviews of their business. (Very few people have the culinary knowledge to properly and objectively assess a restaurant's quality.)
Nope. You don't need to protect us from people who do a bad job benchmarking; we can do that ourselves. If we read their methodology and disagree, we can make up our own minds. Anyway, my experience with Atlassian's services is terrible, so without benchmarks I can just continue to assume that's true forever, and always recommend against them when we're making infrastructure decisions. I guess I can thank you for one big bundle of products fewer to consider.
I get your point, but I also think you're wrong about it only being harmful.
Even if the review is done incorrectly, it's a data-point on a misconception your users have, and it gives you a chance to respond accordingly.
The company I work for has to deal with performance complaints all the time, some from very public and loud entities in our market. A company deciding to move away from our product signals very clearly across the market, and it shows in our renewal numbers -- yet we've never considered saying "don't benchmark us" in our contracts.
Every single time these complaints kick up, we treat it as a chance to prove that we know our stuff: we reach out, offer to assist with re-evaluating, and explain our position, and most of the time it works. It also shows a big public commitment to helping instead of just hiding behind our legal team, and it shows we know what we're doing.
We're not a huge company by any means in terms of people able to act, but we still make it work without the need for such terms. If your customers are frequently doing something that makes your product less than ideal, then you have an education issue that needs to be resolved.
I'm surprised and disappointed by this stance. Other people in this thread have mentioned that such a clause isn't even legally enforceable, so I'm curious if you have any perspective on that.
Ultimately, though I appreciate you sharing your perspective, I personally will be steering clear of Caddy and your other work.
For me, if benchmarks are forbidden, it means the software is significantly worse than its competitors. So the clause by itself is worse than any benchmark could be. There might be exceptions like Oracle, but not everyone is Oracle.
> Benchmarking requires expertise that, it turns out, very few people have. I don't think I even have enough skills to do it correctly and meaningfully.
Very important and often overlooked point.
But I wonder, why not forbid public dissemination of inaccurate, non-reproducible benchmarks?
> Spreading wrong performance information can hurt a business.
The problem is when the benchmark is accurate, reproducible, and based on a completely nonsensical scenario. It's not libel to say "Jira is 50% slower than Bugzilla" when your benchmark serially creates 25000 tickets, but it's not a fair claim either.
That's not to say that anti-benchmarking licenses are the right solution, of course. I can just sympathize with why Atlassian wants one.
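The parent's point can be made concrete with a toy simulation: a serial benchmark of a latency-bound service mostly measures round-trip time, so it can make a perfectly healthy server look slow compared to how it behaves under realistic concurrent load. Everything below is simulated (`create_ticket` just sleeps for a fixed latency); it's a sketch of the methodology flaw, not a real Jira benchmark.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def create_ticket(latency=0.01):
    """Stand-in for an HTTP call to an issue tracker, dominated by latency."""
    time.sleep(latency)


def serial_benchmark(n, latency=0.01):
    """The 'accurate but nonsensical' scenario: n tickets created one by one.

    Total time is roughly n * latency regardless of server throughput.
    """
    start = time.perf_counter()
    for _ in range(n):
        create_ticket(latency)
    return time.perf_counter() - start


def concurrent_benchmark(n, workers=10, latency=0.01):
    """A more realistic load: n tickets created by `workers` parallel clients."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: create_ticket(latency), range(n)))
    return time.perf_counter() - start
```

Both numbers are "reproducible", but only the concurrent one says anything about the server; the serial one would report the same dismal figure for the fastest tracker on earth as long as the network round trip stays constant.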
Suppose you state that a product performed poorly on measurement Y of benchmark X. The problem is not reproducibility or accuracy; it's that X or Y is always really stupid in some way or another.
> No syntax highlighting on diffs, horrible defaults (closing the development branch when you merge into master???), not being able to make a PR after you make one commit to a branch without refreshing and losing your message, inconsistent code formatting that is just horribly broken in general, weekly downtime that's not reflected on their status page, having to manually press a button to see updated diffs after updating a branch, no support for signed commits, API support lacking in the weirdest places, random failures in commit webhooks, etc. etc.
God I hate it.