Atlassian's new terms forbid benchmarking (atlassian.com)
332 points by adtac on Sept 29, 2018 | 195 comments



Can we please talk about how terrible BitBucket is?

No syntax highlighting on diffs, horrible defaults (closing the development branch when you merge into master???), not being able to make a PR after you make one commit to a branch without refreshing and losing your message, inconsistent code formatting that is just horribly broken in general, weekly downtime that's not reflected on their status page, having to manually press a button to see updated diffs after updating a branch, no support for signed commits, API support lacking in the weirdest places, random failures in commit webhooks, etc etc.

God I hate it.


I've also tried the 10-user licenses for both the Jira and Bitbucket servers. I absolutely loathe Jira. I respect its vast feature set, but the user experience as a programmer has felt absolutely abysmal. The React redesign only confused me more. To find the Kanban boards, you had to hit the search button! I hope Jira has improved since the last time I used it, but I have to say I'd probably prefer just about anything else at this point.

As for Bitbucket.... It works, but I can't think of a whole lot it does that GitLab doesn't do better.


Yeah, Jira is horrible. Between the workflows, search filters (create one, apply it to a board, but then you can't edit it?) and everything else, it's really clunky to use.

I've done test runs with other tracking software, and I can't really find anything better. Every tool sucks in its own way. Do you have a recommendation on something that you've found that's better than Jira?


Not really. This seems like a largely unsolved problem.

At my previous startup job we evaluated JetBrains YouTrack and GitLab Issues. Both were fairly competent looking, both had nicer interfaces, but nothing has the same feature set or ecosystem as Jira. GitLab now has multi-project boards and help desk support, so it's actually getting to be pretty useful in its own right for issue tracking.


I like Phabricator; it doesn't offer as much combustibility as Jira, but the options it does have are good for software development workflows. Plus it is open source, which is always a bonus.

https://phacility.com/phabricator/


Is this link a joke? Reading down the page it gets progressively snarkier, definitely in a way that leads me to think this is satire.


The phabricator people used to have extremely bad taste in diffs and version control. See eg https://stackoverflow.com/questions/20756320/how-to-prevent-...

(I think you can turn most of the annoyances off. But it leaves a bad impression. They also seem to like PHP.)


No, it’s entirely legitimate. And thank $deity for those small pockets of humor.


Thanks for the advice, this looks like a nice tool. If there were enough "drive" from some devs and a thriving community, this could become a standalone alternative to Jira one day.


> doesn't offer as much combustibility as Jira

Autocorrect is such a prankster


We have recently switched to Clubhouse, and while the UI is a bit cluttered it seems like a good fit so far.


Use YouTrack, it is from JetBrains. Very flexible (you can write workflows in a programming language), much faster, great keyboard shortcuts and a much better interface.


For tracking my personal task list I'm quite liking emacs org mode so far but I'm still getting the hang of it. It's not really a solution for teams though.


Jira is still horrible. I've only been using it a few months, so I don't know if it's improved since the last time you used it, but if it has, then it hasn't appreciably moved the dial from abysmal. I die a little inside every time I have to interact with it.


- Throw Bitbucket away; I don't even know why they wrote it in the first place
- Use GitLab; even its CE edition is a lot better than everything Atlassian does with Bitbucket and Bamboo combined


> closing development branch when you merge into master???

Why's that a horrible default? It doesn't happen to match the workflow you use? It matches the one I use. So I can understand it's not ideal for you, but they can't suit everyone with a binary default, so what makes it so 'horrible'?


Github handles it better. It lets you decide whether or not to close the branch, lets you protect certain branches from being closed, etc.


The Bitbucket I use gives you the option as a checkbox upon merging.


Remember, Atlassian in their infinite wisdom renamed the on-premises product formerly known as Stash to Bitbucket, despite it being a completely different codebase from Bitbucket.com. I think they are talking about Bitbucket.com.


The bitbucket.com website gives a check box to select whether to close a branch though. Unless you mean that the on-premises product doesn't?


The on-prem one I use has a checkbox when you merge. It's unticked by default.


The on-prem version I was using a month ago had a checkbox, ticked by default to whatever you had last selected.

So if you merged feature -> develop -> master and remembered to delete feature it would delete develop by default as well...


So, if my company has a workflow where that box should always be checked or not, and rarely overridden by the user, ...


A default should not do something destructive or pseudo-destructive. That’s pretty much against most UX standards.


> It doesn't happen to match the workflow you use? It matches the one I use.

You're in the minority here. For the majority of software projects, branches like `develop` are usually considered "long living" and are not closed when they're merged into `master`. That's why we have things like Git Flow [1].

Also an action that defaults to destroying something is user-hostile, so that's why it's a big deal and thus why GP wrote their comment.

  [1] https://leanpub.com/git-flow/read


I use the feature branch Git workflow. How do you know the majority of people here use Git Flow rather than feature branches? Have you just guessed that?

And anyway Git Flow still has feature branches that you close on merging!


Bitbucket provides free private repositories, which I really enjoy for personal projects. I never log into the site though, just a git server I don't have to manage myself.


Gitlab. Same thing, massively better performance, feature set, etc. And you have the option of taking it in-house should a project scale to the point where that's useful.

I migrated everything off BitBucket to Gitlab. I'm forced to keep a Github account because it's expected. As soon as enough people move away from Github to allow me to drop it, I'll migrate those too.


Exactly. They're just my upstream private hg backup and I couldn't care less what their website looks like.


You get free private projects with Azure DevOps too, and it's a great product to work with.


Bitbucket has been offering me free private repos for years, and it's basically never been down. It integrates perfectly in everything I use in my workflow, while sheltering itself from the hype train and avoiding providing useless features just because they are cool. Hands down one of the best products out there. My 2 cents!


> never been down

Oh, I beg to disagree. The last 8 months have been much better, but in 2015-2017 it was like 3-5x per year that I had to tell my boss that we can't deploy because Bitbucket wasn't triggering the CI server.

edit: come to think of it, I switched timezones from California to Asia. I haven't run into as many problems because no one is awake to break something :).

http://status.bitbucket.com/


It went down literally 3 weeks ago, preventing deploys. Fetching refs has a huge latency some days.

Syntax highlighting on diffs is not a useless feature.


“Never been down” is an outright lie.


https://status.atlassian.com/history

We keep an ifttt webhook into this page, very helpful to resolve the WTF chorus when something is busted. Props to Atlassian for transparency: please keep this going.

We use Confluence, BB, and Jira. The good news is that the triad is very nicely integrated: you can easily cross-link issues between them. There is definitely room for improvement on uptimes though.


As I said in my original comment, the performance issues and downtimes we experience are just not present on the status page.

As one data point, several weeks ago it would take BitBucket over 70 seconds to respond to a `git ls-remote` on a repo containing about 200 branches. Usually this takes ~10 seconds or so, but it caused all kinds of headaches with Jenkins. On top of this, pushes were incredibly slow.

Status page was all green.


Yeah we've seen that too.

They definitely need more alerts pertaining to slow API responses on the git interfaces. I still give them props for usually fessing up to outages. Some places (cough aws) will be out for hours before they admit to it -- not gaming any numbers, nope never.


I confirm this.


For syntax coloring in BitBucket, in Chrome at least, try the Refined Bitbucket [0] extension.

[0] https://chrome.google.com/webstore/detail/refined-bitbucket/...


That one needs an extension for such a fundamental feature of code tools is a bit ridiculous. Syntax highlighting isn't even that hard.


I use this all the time; it's a fantastic extension that makes Bitbucket usable. Unfortunately it's broken with the recent Firefox betas.


I assume this is Bitbucket.org?

Because Bitbucket Server (on prem) has been fantastic for us and is continuously improved. From what I know, it's a different codebase though (was once known as Stash)


There's no other tool with the price-to-features ratio that Bitbucket gives.

The included Bitbucket Pipelines is also very, very cool. At 30 developers, you pay $1 per month; GitLab is at $4 minimum.


Yeah, but GitLab delivers a better service, and if you don't even have $4/month then you should write a new business plan ;-)


The problem is that everyone assumes all businesses operate out of the US.

$4 buys you 4 meals in India.

But the real reason is that startup funding rounds are generally a quarter of the size for an equivalent stage compared to the US.

Every bit counts.


No merged PR builds in CI either. Bitbucket is terrible.


Not allowing people to talk about how your application performs is not compatible with GitLab's value of transparency. I've added a commitment to always allow this to our stewardship promises: https://gitlab.com/gitlab-com/www-gitlab-com/commit/da81150e...


Thanks a lot for adding that!

This is a bit off-topic: (OP here) the reason I was even reading Atlassian's terms is to understand what kind of language is used in enterprise self-hosted software terms, because I need one for my own product. I really like GitLab's open core model (as long as the core is still perfectly usable), and I was wondering if I could ask you some questions about it, as I'm looking to adopt something very similar? Could I contact you over email about this?


For sure. Maybe you want to talk to Jamie, she knows most about this. Her email is jhurewitz at our domain.

Please have a https://about.gitlab.com/handbook/ea/#pick-your-brain-meetin... (public video) so others can benefit as well.

Please reference the url of this response in your email to her.


Brilliant, thanks a lot Sid!


You're welcome. Jamie just updated our terms to allow benchmarking explicitly https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/...


Atlassian has always forbidden talking about the performance of their products, in their ToS and in their previous EULA. We all know why, but we don't talk about it.

One of my customers even raised the issue with them, saying it would cause harm to the company if it came to be known, but they dodgeball’ed it with legal.


What does that have to do with Atlassian or any other company?


They're a competitor.


You call this a competition? :-D

Basically GitLab is a competitor to GitHub.

Atlassian always wanted to compete with those but will never succeed.

Just like they had to admit that HipChat and Stride are worse than Slack and they'd never get to the point that someone would want to use their chat tools, they will have to admit that Bitbucket + Bamboo will never be able to compete with GitLab or GitHub.

In my opinion the only really good things are Jira and Confluence.

There is no standalone tool that does the Jira stuff as well as they do.

There has never been an enterprise wiki as usable and consistent as Confluence. Even though I have many pain points with it, I don't know of anything better for a wiki solution that has to be used by everyone from management to development (devs cry the most about it).


> I don't know something better for a wiki solution that has to be used by everyone from management to development

There’s actually a lot of enterprise wiki software out there that is comparable or better. One example from a previous job: http://twiki.org


Don't know what you mean by "enterprise wiki solution" but MediaWiki works just fine.


Gitlab basically has most of Jira Software’s features.


Here's a benchmark for you, Atlassian: your software is slow as molasses and getting worse with each redesign.


It seems like they're balancing features with speed. In order to get a bunch of extra features, every page load needs to be 200 MB of extra crap that 99% of users won't use on 99% of their page views.

Oh and hamburger menus are hiding the key features you DO use every page view.


I always thought of Atlassian projects as inheriting all the problems of Java, exposed in UX form.


It's always the people/expertise, rarely the tool or language.


Fair. But for purposes of language = ecosystem, I'd definitely say organizations tend to mimic language structure, and vice versa.

At least to a first order, palm-reading, internet comment approximation. ;)

Modern Java, IBM

C# .NET, Microsoft

React, Facebook

Rust, Mozilla


I'm not sure what's more concerning, that you're calling React a programming language or that you're unaware that Facebook is written in PHP.


I believe you missed the point for the trees. And for the record, I'm under no illusion that Firefox is fully written in Rust either. :p

https://4e6.github.io/firefox-lang-stats/


Why doesn't JIRA offer a native desktop app? Everyone using it is on PC, Mac or Linux right? And then a cut-down iOS (iPhone and iPad) app covers the rest?


A few colleagues use a Visual Studio extension, although that seems to be mostly handy for time-tracking and limited interaction with issues. It's also stuck in a UI that goes back to 2008, so perhaps rather fitting ;)


An Electron wrapper around the web app will do wonders for productivity and RAM utilization.


The new design for Jira really sucks, has made everything slower, and crappier to use. Page load times of 10+ seconds are common. And no clear usability gains.

I was a huge fan, but any more I just want to use GitHub.


Please talk to your admins! In my experience, most Jira slowdown issues can be traced back to permission checks. (And you can enable tracing in the system settings to verify this.)

If only your team needs to access it, set the instance to open permissions (i.e. most rights are set to "Everyone") and then control access using the network or with a proxy in front. This took the instance I run at $dayjob from infuriatingly slow to no slower than any other web application.

—Source: 5 years and counting Jira admin


This is knowledge that could come in handy one day, so thanks.

But fine-grained permissions are a fairly regularly cited reason for using Atlassian’s stuff over other (simpler...) offerings. This sounds a lot like “don’t try the fancy stuff because it’s unusably slow”.


I'd be willing to bet part of that is too many fine-grained permissions. It's a bit of a footgun.

Basically, anytime you hit a ticket's page, Jira has to scan your account's group memberships and compare that against a litany of permissions to determine whether you can even see the ticket and what actions you can take with each of its fields. This is done to avoid showing you UI elements for things you can't do. If a given permission is set to "everyone", the check simply isn't done (kinda the equivalent of replacing a call to the user directory with a "return true;")

I'm not talking entirely about reducing security here. I mean that if everyone in your team is a member of a certain security group, and only your team touches your Jira, set that permission to "everyone" rather than using that security group - the lookup is completely unnecessary in this case.

Basically, you want as few permissions as possible to ensure the level of security you actually need. Often this isn't done, people go a bit crazy with permissions schemes trying to segregate this and that.
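
To make that short-circuit concrete, here's a minimal illustrative sketch (not Atlassian's actual code; the class and names are made up) of what "set it to Everyone and the lookup never happens" amounts to:

    // Illustrative sketch only -- not Jira's real code. UserDirectory and the
    // names below are made up; the point is the early return on EVERYONE.
    import java.util.Set;

    enum Grant { EVERYONE, GROUP }

    interface UserDirectory {
        Set<String> groupsOf(String user);   // e.g. an LDAP/Crowd round-trip
    }

    final class PermissionCheck {
        private final Grant grant;
        private final String groupName;      // only consulted when grant == GROUP
        private final UserDirectory directory;

        PermissionCheck(Grant grant, String groupName, UserDirectory directory) {
            this.grant = grant;
            this.groupName = groupName;
            this.directory = directory;
        }

        boolean isAllowed(String user) {
            if (grant == Grant.EVERYONE) {
                return true;                 // cheap path: no directory lookup at all
            }
            // expensive path: a membership lookup per permission, per page view
            return directory.groupsOf(user).contains(groupName);
        }
    }

Multiply that expensive path by every permission on every field of every ticket view and the page-load cost adds up fast.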


Yea, they didn't say anything about UX complaints, for which there are MANY.


You don't like up to seven toolbars on a single page?


This is also very similar to the clause that allowed Larry Ellison to allegedly try to have a professor fired for benchmarking Oracle: https://news.ycombinator.com/item?id=15886333

I was just going through theirs (and other) terms and conditions to understand what kind of legalese goes into these and I was appalled at this. While these terms aren't effective until after 01-Nov-2018, I tried searching online to see if there was any discussion about how stupid this is, and this unanswered forum post came up.

I have no idea how this is still considered acceptable.


It’s not, the Consumer Review Fairness Act prohibits this practice.


That's the first I've heard of the CRFA - thanks for mentioning it.

I was curious whether or not the CRFA could be overridden by contract terms, but the FTC claims quite the contrary: https://www.ftc.gov/tips-advice/business-center/guidance/con...


A contract provision that is unlawful can’t be enforced.


I was curious whether or not the CRFA could be overridden by contract terms

The whole point of it is to override and render invalid/unenforceable terms which try to forbid posting honest reviews of companies.


I just looked at the text of the law. As the name suggests, it only applies when an individual is a party to the contract. If a company is the licensee, it wouldn't apply.


Plenty of proprietary software has these restrictions: Datomic, MS SQL Server, etc. I suppose you could make a case that most published benchmarks will be flawed or inaccurate, but it's still wrong to forbid them.


I've submitted many benchmarks over the years for new Jira and Confluence UI "upgrades". I've received responses to the tune of "our developers don't see that behavior issue on Windows", even though the test systems I used were Linux-based and clearly stated as such. Speaking frankly, the new Jira UI is fucking shit and makes my job slower due to the fact that pertinent information is obscured and only clearly displayed when I click "use old interface". This means that I must first wait for the whole page to render, click a link, and wait for the entire page to redirect to the old UI and render again. Multiply this by every single ticket I have to open and you can see the minutes tick off the clock.


> Speaking frankly, the new jira ui is fucking shit and makes my job slower

Yep. Bitbucket as well. In some of the repo config screens you get a save button, in others you don't. No consistency at all.


Where is the “old interface” link?


For me, it's in the top right corner of every issue.


Huh, I haven’t seen a link like that in months. Unless... is it inside that “...” menu?


I feel like there is a subtle pattern of attitudes and behavior that I pick up on from Atlassian. It’s almost as though they don’t get that they need to keep working to innovate and please their customers to stay relevant. Look at what happened to Hipchat, and what I’d argue will eventually happen to JIRA.


It’s almost as though they don’t get that they need to keep working to innovate and please their customers to stay relevant.

Like all enterprise software vendors, their customers are the ones who buy it, not the ones who use it. In almost all cases the person signing the cheques will never experience any of the issues themselves.


>eventually

This happened at least 2 years ago. Every update to Jira makes it slower and less usable. I have given them this feedback many times, and I suspect I am not alone, and they simply do not care. Until they see subscriptions drop, they will do nothing; they are like Rational Software or CA when they hit the big time and started buying up all their competition.


Which CA are you referring to here?


That's not the best comparison, HipChat is an ant compared to JIRA, and Atlassian gave up on HipChat only a few years after acquiring it.


Slack pretty much killed HipChat and drove directly into enterprises - which should have been Atlassian sales. Once you have your tentacles in enterprise (and Slack are almost there), it's very easy to sell huge contracts of pretty average software. JIRA is great, but it's not that complex, and it would not take much to start chipping away at it.


JIRA is quite complex, but it suffers from a Swiss army knife complex, and at some point you end up "programming in JIRA" instead of using a much simpler purpose-built tool, or even building one.


i.e. the Inner Platform Effect (https://en.m.wikipedia.org/wiki/Inner-platform_effect), and all the worse for the programming being done without the benefit of programmers, in this case.


If your processes are complex, you likely need a flexible tool which allows arbitrary workflows. Of course you might argue that complex processes should bend to match tools, but not everyone who pays for software will agree, so Jira definitely has its market as a flexible tool, even if it means that there should be dedicated Jira developers.


I heard from a friend at a Fortune 100 company you would all know that when they tried piloting HipChat internally, it simply did not scale well. Their servers choked under the load so frequently that they got out of the contract.


> JIRA is great, but it’s not that complex

You and I must use a very different JIRA.


Not complex? We have to hire entire teams of people just to handle that confangled mess of options and weird defaults and strange behavior so as not to slow down the developers and everyone else.


Jira is complex, but no one needs that complexity. Jira is built to sell to executives, not for anyone to actually use. Executives need to know that they'll be tracking their employees' time down to the second with Jira.

But then they're never going to look at the UI for Jira either because they're too busy for that.


I'm not gonna defend Jira. I don't like it. But this is the same thing people like to say about Office. It turns out that the 80/20 rule (80% of users only use 20% of features) never guarantees that even 50% use the _same_ 20%. You need to implement a lot more of Jira's complexity than you might think to get a significant part of Jira's market share.

(Slack is another good example. Slack is "just" an IRC client with better emoji/GIF support. But I'll be damned if Microsoft Teams doesn't _really suck_, even with the benefit of knowing how Slack does everything. Good software is hard and takes work.)


> I'm not gonna defend Jira. I don't like it. But this is the same thing people like to say about Office. It turns out that the 80/20 rule (80% of users only use 20% of features) never guarantees that even 50% use the _same_ 20%. You need to implement a lot more of Jira's complexity than you might think to get a significant part of Jira's market share.

Trello was the counterexample: it implemented less than 20% of Jira's features and was all the more useful for it. (thus Atlassian bought it and have started ruining it with bloat).


If Slack had an on-prem product, HipChat would be toast.


Hipchat is already toast, it was being replaced with Stride, and Stride got the axe. Atlassian has now partnered with Slack for chat.


And now they don't have an on-prem solution. Very upsetting.


Jira + Confluence Cloud's performance problems are the number one reason our company is actively seeking ways to get the Atlassian needle out of our arms. It's really tough once you've invested the time to get Jira working just the way your org works, but the issues are just piling up and the complaints grow louder each month.


Shameless plug: If you’re looking for a Confluence replacement would love for you to try my startup’s product, https://tettra.co. We have Github and Slack integrations and are working on Jira soon too.


Your pricing page says this:

"We don't plan to offer a hosted version of Tettra in the immediate future. We believe that secure web services are the new standard for business applications and take security seriously."

Seriously, NO.

It has to work without an internet connection at all. That includes installation, updates, and help pages. People who care about security have air-gapped networks. To update software, we burn it to a DVD and then walk it over to the secure network.

I have a hard time understanding how anybody would tolerate their business secrets being on your servers. That is really weird to me.

It's also just slow.


Uh, this is so far outside the norm of every company I’ve ever worked for/heard of that I think they’re safe not worrying about you in their business plan. Unless you’re working with classified data, that is — but those cases have tons of other requirements beyond working air gapped.

Now, there are other reasons you would want on-prem other than air gapped networks, but that’s not the discussion we’re having.

Edit: oh, this guy is a troll. Just read his other comments. Sigh.


If it is "so far outside the norm", then that could be why people keep getting hacked and facing clone companies popping up in other countries. Where I work, with hundreds of people, we aren't making this mistake.

As for trolling, "Assume good faith." is in the site guidelines: https://news.ycombinator.com/newsguidelines.html FYI, I'm not trolling, and any appearance of it is due to cultural differences.


What sorts of issues? For me, performance is the issue, I've never found JIRA buggy or inadequate, and I'm generally able to find a given ticket and interact with it in the expected way.


> I've never found JIRA buggy

I'm not sure how you can have used the UI and hold this opinion. I feel like I've been greeted by the same basic bugs time and time again, like clicking on an issue and it not opening until the second click, or opening the previously viewed issue.


It loses drags too. Just awful.


Jira performance takes a fair amount of experience to nail down. If anyone needs help, that's what my company's (https://atlasauthority.com) core focus is. I spent 4 years at Atlassian in support, and now we work with a number of the largest deployments in the world to help make things fast(er). It won't ever get to the sub-second standard we like to see from modern SaaS apps, but it also doesn't need to be the 12s page load times we often see in client environments.


How about JQL? Their query editor is so buggy.


I have used JIRA for 5 years and came to it begrudgingly when the project was complex enough. It handled the complexity, details and customizations of the project well enough though.

It's funny that reporting benchmarks is being banned. The pretense that web apps are too complex for results to be reasonably communicated is laughable.

There is a 15-year-old bug open to reduce the flood of emails Jira can create from updates. JRASERVER-1369 [1] could be its own community.

With minimal Java/JVM tuning experience, it seems likely the cloud product is aggressively cached and resource-throttled. Once a page or filter has loaded, it generally loads quicker.

The speed of an on premise install of JIRA compared to the cloud can be staggeringly faster.

One solution might be for some clever person to first crawl an entire Jira or Confluence site, and then continually ping all the pages to keep the system performing better.

[1] https://jira.atlassian.com/browse/JRASERVER-1369
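
If anyone wants to try that last idea, here's a rough sketch of a pinger that keeps hitting a list of pages to keep caches warm. It's entirely my own guesswork, nothing official; the URLs are placeholders, and a real instance would need authentication and a crawled URL list:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.List;

    public class CacheWarmer {
        public static void main(String[] args) throws Exception {
            // Placeholder URLs -- in practice you'd crawl the site once to build
            // this list, and send real credentials/cookies with each request.
            List<String> pages = List.of(
                    "https://example.atlassian.net/browse/PROJ-1",
                    "https://example.atlassian.net/wiki/spaces/TEAM/overview");

            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(10))
                    .build();

            while (true) {
                for (String page : pages) {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(page)).GET().build();
                    // Discard the body; we only care that the server rendered the page.
                    HttpResponse<Void> resp =
                            client.send(req, HttpResponse.BodyHandlers.discarding());
                    System.out.println(resp.statusCode() + " " + page);
                }
                Thread.sleep(Duration.ofMinutes(5).toMillis());
            }
        }
    }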


[redacted by author request]


Atlassian employee here. JIRA's backend is not Node based. It's mostly Java. We do use Node for other products, including Trello, and it's more than fast enough.


I think of Atlassian as the Trump of software.

It's slow, has annoying UX, and is very inconsistent.

But for some reason businesses like it, it is what it is and as an employee I have little choice so I have to bear with it.


> Jira is 10 years of technical debt

> Cloud was full buy in to AWS...

> everyone is using nodejs and your language is now banned...

> Add internal politics... hire as many people as possible and terminated anyone raising valid concerns...

> monolith service now becomes highly distributed...

As some comments mentioned before, there is perfectly good software written in NodeJS (Trello) which is blazingly fast. Similarly, there are plenty of good, performant multi-tenant applications on AWS. Anything that has been used for a long time (Linux, Java, protocols, etc.) is always going to carry tech debt and there is nothing wrong in that.

While there may be a grain of truth in some corner cases of what is written, it overall comes across as emotionally negative venting, and in some places borders on indirect propaganda against generic tech (sorry).


In this day and age, languages, frameworks, and libraries that deal poorly with parallelism deserve criticism for it.

Yes, competent engineers can work around these problem given time (e.g. pivoting from thread-level to process-level parallelism), so it wouldn't be fair to entirely blame nodejs for the problems, but that wasn't the vibe I got from GP.


Finally some sort of explanation. The feature set is great for complex use cases, but the cloud version is unusably slow. Worse, the support team seems trained to pretend it's not a problem. The last web app I remember being as slow as Jira is Friendster.


The hyper-distributed microservices are definitely one of the big reasons for the unnecessary complexity and latency problems. It was surprising to me that the JVM-based services were load tested for only 4-10 minutes before the test was called successful.


Yeah, it's odd, as some very clever, talented people have passed through their doors; some very mixed feelings towards them.

The JVM is just getting warmed up when load testing for that short a period.


Do you know of any sources that show it takes that long to warm up the JVM?


It has nothing to do with time and everything to do with the number of times hot paths get executed.

On a server JVM instance a method will have to be executed (by default) 10 thousand times before it is compiled.

There is also a question of how stable the compilation profile is and how likely deoptimizations are but that's a different story.
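
If you want to see this yourself, here's a toy sketch. The numbers assume the classic non-tiered server-compiler default (-XX:CompileThreshold=10000); tiered compilation uses different thresholds:

    public class WarmupDemo {
        // A tiny "hot path". With -XX:-TieredCompilation it stays interpreted
        // until it has been invoked roughly CompileThreshold (10,000) times.
        static long work(long x) {
            return (x * 31) ^ (x >>> 7);
        }

        public static void main(String[] args) {
            long acc = 0;
            for (int i = 0; i < 50_000; i++) {
                acc += work(i);
            }
            // Keep the result live so the whole loop isn't optimized away.
            System.out.println(acc);
        }
    }

Run it with java -XX:-TieredCompilation -XX:+PrintCompilation WarmupDemo and you can watch when work() actually gets compiled. A 4-10 minute load test may never exercise the paths that matter often enough to reflect steady-state performance.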


> nodejs because everyone is using nodejs and your language is now banned it's not surprising there's noticeable performance differences, it was certainly predicated early on but they where to caught up trying to be google and using the popular language to hire as many people as possible

I wonder how typical is this nowadays in IT companies and departments all around the world.


At least somewhat.

I'm aware of a nodejs script that is basically packed up with the node runtime and run on Windows as a "native application," which interacts with the filesystem, windows registry, and more. The packer is years old and nobody knows how it works.

It is everything you expect of such a program. From both a user and developer perspective.

It is deployed on very expensive, very important equipment in pharma labs and hospitals.


This is less uncommon than people like to admit. I've seen, and in some cases been directly involved with, code written by an inexperienced (giving the benefit of the doubt here) developer whose position was replaced twice over, that runs critical systems on big-money (but not life-threatening) equipment and business systems. No documentation, no tests, even one that relies on a known platform bug never being fixed (not kidding). Nobody knows how it works, and there is no time to go through it and clean it up (business decision).

I've even stated in writing on one system that I refuse to receive "emergency" calls on holidays, weekends, or nights to fix a particular system if it ever goes down at that time. Management refusal to plan ahead or heed warnings does not constitute an emergency on my part.


> Add internal politics where some things which are heavily multi threaded built in staticly compiled languages had to be built in single threaded nodejs because everyone is using nodejs and your language is now banned it's not surprising there's noticeable performance differences, it was certainly predicated early on but they where to caught up trying to be google and using the popular language to hire as many people as possible and terminated anyone raising valid concerns i.e it's slow and node wasn't right.

What languages were banned? This is a very interesting data point.


No languages were banned. We use Java, Python, Node, and Golang at Atlassian, possibly others, too.


FYI, I have huge problems with your products at work. We use a variety of your products and competing products, on a project-by-project basis. I always advocate for your competitors.

One reason is that you made the business decision to attempt customer lock-in. You supported wikimedia-style text markup in your wiki (in addition to the GUI) so that people could migrate to your stuff, and then you took out that feature so that people would have trouble leaving. I'm sure that makes sense to an MBA, but I have been discouraging use of your products all throughout a large company.

The other reason is that yes, all your stuff is slow as fuck. OMG it is slow. The Java grows to consume gigabytes for no damn reason, and it munches CPU time, and generally it sucks pretty hard.

Golang might be right for you. I think it's the fastest choice available for development teams that can't handle stuff like pointers. On the other hand, I think you would still manage to sort-of-leak memory by hanging on to references that you really don't need.

I think you could fix the slowness problem by requiring all development and testing to be done on computers that are slower than the ones your customers use. Get an old Pentium II with 256 MiB of RAM... which is still overkill for the task at hand. Remember, back in the day we ran stuff like your products on computers with 8 MiB or less and a 486 or less. You can live with a Pentium II and 256 MiB of RAM, and the resulting performance of your software will delight your customers.


[redacted by author request]


I'm skeptical that there isn't another side of this story or at least less absolute terms used.

For example, maybe there's a push against everyone using all their own favourites everywhere. I can see a strong argument for, "please stop writing in Foo. We use Java, Go, Python, and C++. Pick the right one of those for the job at hand. We all benefit from using a common set of tools."

I'm not denying what you're saying. Just feeling a healthy skepticism that they "ban" languages without a good-faith objective.


I’d probably stop talking on this thread if I were you. If your management chain sees this, you’ll have at minimum a conversation about using judgement when speaking in a public forum, at worst, your comments might be used as justification for disciplinary action.


His management has already nudged him to find a new job.


Last comment

This was a bit back, but not too far back.

Took a significant pay rise elsewhere and got to work with some other ex-Atlassian colleagues.


Disparaging your former employer might also be against your employment agreement and doesn’t make you look good either. Just giving a friendly heads up!


Does Google even use nodejs or try to hire devs for 'popular languages'? Last I heard everything is still mostly C/C++, Python and a little bit of Go.


Google is one of the bigger Java shops out there. That, if anything, is their first-and-foremost language outside of search.


It’s obvious they know they have performance problems. They think they can keep it secret by pulling this kind of crap.


It’s no secret. We’re trying our best to drag our 15+ year old codebases into the modern age. We’re hiring if you want to help! https://www.atlassian.com/company/careers


Then what’s the deal with these terms?

I would have serious reservations about working for a company that bans publishing benchmarks of its software.


> It’s no secret

...until November, when these outrageous terms come into force?

It also seems somewhat delusional (or perhaps "misguided", if I was feeling charitable) to try to hire in a thread bashing Atlassian for these downright scummy terms, and indeed for having tools that are simply awful to work with.


After 4 years of dicking around with Jira/Confluence, we finally got rid of them last month. Haven't looked back since.


What did you replace it with?


We replaced Jira+Confluence with Notion. It might not work for everyone, but it definitely fits our use case. (small dev team)


Not parent, but we replaced Atlassian with Github and Pivotal


Tracker is one of our best ambassadors, I think.

I miss it sorely when working with Github Issues.


Tracker?


Pivotal's project management software

https://www.pivotaltracker.com/


I clicked the link above, edited my cookie preferences on the pop up, waited 5 seconds for them to be updated... still updating so closed the tab. Modern UX sucks.


Oh wow. So yeah, I guess we call it pivotal so much at work that I didn’t even realize it was actually called Pivotal Tracker.

It’s a great product. The speed and ease of use is amazing.


Looking at the Tracker website, it's interesting that we play down the connection back to Pivotal.

Which I guess makes sense, Pivotal as a whole is aiming more towards capital-E Enterprise.

Our headline products, for reference: https://pivotal.io/products


I will be honest, I was completely unaware of your other products, not that I would be your target audience. That being said, thank you for not pushing your other products on tracker users. That was something that always annoyed me with Atlassian.


I question companies prohibiting basic human rights. Free speech is a basic human right.

Link to UN human rights https://www.un.org/en/universal-declaration-human-rights/

Article 19: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."

So it's a basic human right to be able to express how fast or slow Atlassian's products are.

Further, the First Amendment to the United States Constitution:

Amendment I

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances." https://www.law.cornell.edu/constitution/first_amendment


Wow, I last used Jira a long time back, when it was pretty much new, and loved it, and I loved the values Atlassian had. It was far better than anything I'd come across before.

Kinda shocked to see all the negative posts on here. How can a product fall so much?


To be honest, most companies using Jira never actually configure it to match their requirements. That's not really their fault, Jira is a beast and quite a lot of time needs to be put into getting it just right.

Jira's always been hard to get going with, and it's still way better than pretty much everything else (once configured), but they are really bad at actually making it easy to use.


The (extremely large) company I work for has a product management team that customizes Jira to match what it wants, which does not work at all for anyone else. Upper management of course doesn't understand anything about software development and just wants to see meaningless burndown charts. I am sure it's possible to set up Jira in a useful fashion, but damned if I have ever seen it done.


Are there any examples of companies taking action when these kinds of restrictions are ignored?

I would have thought terms like these would prompt people to release benchmarks for the sole purpose of generating bad PR if the company actually took action.


Two companies, everything else being the same: one with JIRA, the other with GitHub Issues wrapped with Waffle. The choice would be a very simple one.

Oh and https://twitter.com/HackerNewsOnion/status/98160924222131814...


Not surprising considering how terrible their products are. You don't need to benchmark them to know they're slow.


I have a simple heuristic: if the company bans benchmarking, then the product is slow (and they know it).


I'd also add that Atlassian are a well-known user of immigrant labour under an Australian visa scheme similar to the H-1B. They are hugely pro-immigration publicly as well. (Despite this, their founder just bought a huge mansion in a very expensive part of Sydney completely protected from population growth due to restrictive bylaws.)

I'd propose that they are hiring cheap workers just to keep the ship afloat, rather than to radically improve it.


I’d propose that you’re wrong - with relocation and visa costs it’s significantly more expensive to bring in someone from overseas. Not to mention it’s frequently 3-4 months after interviewing before they can start, as opposed to 3-4 weeks for a local hire. We hire as many local people as we can, the interview process isn’t any different.

The other aspect of the founders believing in the importance of being able to hire talent from overseas is actually at the higher levels - the local talent pool of managers with 10+ years experience running large SaaS orgs is pretty small, given the industry here is fledgling. We’ve developed our own leaders internally, but there’s no substitute for bringing in an external hire to develop the next generation of leaders.


Recently there was some discussion about how restrictions like these are void in the EU; see https://news.ycombinator.com/item?id=18064772

I always wondered how or if it is possible to place arbitrary restrictions on software use.

Also I wonder if a clause like this would be binding for tech journalists who run a benchmark because essentially they don't really agree to the license when they are testing software.


> Cloud terms Para 3.3

> Except as otherwise expressly permitted in these Terms, you will not [...] (c) use the Cloud Products for the benefit of any third party;

What does that even mean? "Benefit" is such loose language. Can I not use JIRA to build anything that 'benefits' my customers? Can someone with experience working on such terms throw some light on this?


This is hilarious.

At one point I was writing test automation against their JIRA Cloud offering, because they didn't provide an analog to their authentication API in the JIRA on-prem version.

To get the tests to pass, I had to create a jiraRetryFixture, and when that didn't work I wrote a preflight check which would just skip those tests if the service wasn't available.
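
For anyone curious, the preflight check had roughly this shape. This is a rough sketch using JUnit 5 assumptions; the names and URL are placeholders, not the original code:

    import static org.junit.jupiter.api.Assumptions.assumeTrue;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import org.junit.jupiter.api.BeforeAll;
    import org.junit.jupiter.api.Test;

    class JiraCloudAuthTest {
        @BeforeAll
        static void preflight() {
            // If JIRA Cloud isn't reachable, skip the whole class instead of failing it.
            assumeTrue(reachable("https://example.atlassian.net/status"),
                    "JIRA Cloud unreachable; skipping auth tests");
        }

        static boolean reachable(String url) {
            try {
                HttpClient client = HttpClient.newBuilder()
                        .connectTimeout(Duration.ofSeconds(5))
                        .build();
                HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
                return client.send(req, HttpResponse.BodyHandlers.discarding()).statusCode() < 500;
            } catch (Exception e) {
                return false;
            }
        }

        @Test
        void canAuthenticate() {
            // ...the actual test against the auth API goes here...
        }
    }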


Slow/frustrating UIs can be one of the biggest barriers to productivity.

At Monolist (https://monolist.co), we’re building a streamlined task experience that integrates deeply with Jira (and Confluence) Cloud specifically so you don’t have to deal with these painful UIs.


What's a decent alternative to Jira for support/workflow (non-software)?


Is Atlassian still a thing? Why?


No serious alternative to jira (if you have one, I would love to know about it).

Hard to find a decent alternative to confluence for an internal wiki, particularly one with a good ACL system. We need certain customer details locked to just the people servicing those accounts.


Jira's primary user demographic is by its very nature capable of building a replacement for it.

Build a bare bones system just like Git itself that keeps track of the data and ACLs, and let the silicon valley startup guys make fancy web interfaces and cloud packaging to make it palatable to middle management.

Like Git the core can be moved around and interacted with in the terminal so you're not dependent on any one vendor, and if you don't like any of their GUIs you can just work on the console.


Youtrack


thank you


Statuspage.com seems to be doing well


[flagged]


Could you please stop posting unsubstantive comments to Hacker News?


The HN title is misleading: the terms do not forbid benchmarking, just public dissemination of the benchmarks. This is a standard software EULA clause.

One reason we decided to keep this clause in the Caddy EULA (which I should clarify here only applies to official binaries, not the open source, Apache-built binaries you can make yourself) is that we found out that very few people are expert enough to benchmark correctly. I've read a dozen Caddy benchmarks, for example, that turned out to be based on false assumptions or had hidden factors or were simply not reproducible (and not just by me).

Benchmarking requires expertise that, it turns out, very few people have. I don't think I even have enough skills to do it correctly and meaningfully.

Also, web servers are complex enough (in terms of both configuration and all the layers involved with networking stacks) that one correct benchmark is not generally useful to the next person.

Spreading wrong performance information can hurt a business. It's not that there's anything to hide or any desire to take away your freedom -- and I would normally be one to assume the worst from any large company -- it's just business: they don't want the risk of bad PR based on a possibly false premise, especially when that information tends to only create negative hype rather than actually being useful.

Anyway, this link doesn't seem like news. Just usual HN hype.


>> The HN title is misleading: the terms do not forbid benchmarking, just public dissemination of the benchmarks.

If a tree falls in the forest, does it make a sound?

The title is not misleading. Performing 69,000 benchmarks but being unable to publish them is de facto banning benchmarks, period. Hiding behind the idea that benchmarking is hard, and therefore should be banned because it might be fake news, is ludicrous.

It is also not standard software EULA licensing, unless you think Oracle's practices are somehow industry standard and good for everyone.


This way they can make you spend a ton of time bringing up a trial version on your internal systems to do the benchmarking, effectively forcing lock-in and sunk-cost-fallacy thinking, then put you into tons of meetings that are really just attempts to upsell.


Honestly, if I had widely used software with an enforceable EULA term which allowed me to benchmark but not publicly disclose bad results I found, it would make for even worse PR for the company: I'd be able to go to a tech industry reporter saying "I ran benchmarks on this software, and I think many people would be interested in the results, but the company forbade me to release them publicly. I will still share them non-publicly with any interested parties under NDA." Or if private dissemination were also forbidden, I'd change the wording accordingly.

The better way for a company to handle this concern, if they feel it's important, is to proactively run and release benchmarks including commentary on the results, together with everything necessary for anyone to reproduce their results. Even better if they fund a trustworthy neutral third party to do this instead, with proper disclosure of the funding.

They can then respond very effectively to bad PR about badly done benchmarks. Unless their performance is actually bad, of course.


That's certainly how things should be. But this clause was one of Oracle's early innovations (https://danluu.com/anon-benchmark/) and they did pretty well. Do we need to understand how they got away with it to have a good chance of changing the norm?


Question for any lawyers - can they gag any benchmarks that are published against the terms? I'd imagine the license terminates - what kind of damages can companies realistically extract when you break your license agreement?


> It's not that there's anything to hide

Clauses like these make me think you think there is. Which is perhaps a bigger red flag than the attempted censorship in itself.

> Spreading wrong performance information can hurt a business.

Firstly, I'd suggest you should perhaps focus on trying to educate your users on how to make better performance tests - if they are bad at making benchmarks then they are likely bad at running your server as well.

Secondly, boohoo. Not spreading unflattering but correct performance information might not hurt your business but will hurt your customers.

Lastly, you are curtailing speech, which isn't ethical, and because of that I'm pretty ambivalent about any hurt visited on your business. Imagine if everyone did that about everything.

> complex

> requires expertise

I'm sure every despot anywhere used a variation of your argument: something about economics being a complex offshoot of mathematics that requires expertise to handle, so you'd best not share any overly rushed opinions.

> Apache-built binaries

Which people can benchmark to their heart's content, and I'd hope the clause would irritate many people into publishing their own benchmarks.


I think you should also include restrictions in your EULA to prevent people from publishing statements about your code quality too. Very few are qualified to measure it, and spreading potentially false information can hurt a business. I can think of several other items that have limited expertise in evaluating and could cause negative PR harm if done falsely, so you probably should just enumerate those in the EULA too.

Also, I gather from your comment in support of these restrictions that they are more than boilerplate and you will pursue offenders.


I think car companies should prevent people from publishing safety or performance analysis of their vehicles. And restaurants should disallow public reviews of their business. (Very few people have the culinary knowledge to properly and objectively assess a restaurant's quality.)


Nope. You don't need to protect us from people who do a bad job benchmarking; we can do that ourselves. If we read their methodology and disagree, we can make up our own minds. Anyway, my experience with Atlassian's services is terrible, so without benchmarks I can just continue to assume that is true forever and always recommend against them when we're making infrastructure decisions. I guess I can thank you for a big bundle of products less to consider.


I get your point, but I also think you're wrong about it only being harmful.

Even if the review is done incorrectly, it's a data-point on a misconception your users have, and it gives you a chance to respond accordingly.

The company I work for has to deal with performance complaints all the time, some from very public and loud entities in our market. A company deciding to move away from our product signals very clearly across the market and it shows in our renewal numbers -- yet we've never considered saying "don't benchmark us" in our contract.

Every single time these complaints kick up, we treat it as a chance to prove that we know our stuff, we reach out and offer to assist with re-evaluating, and explain our position, and most of the time, it works. It also shows a big public commitment to helping instead of just hiding behind our Legal Team, and it shows we know what we're doing.

We're not a huge company by any means in terms of actual persons able to act, but we still make it work without the need to put out such terms. If your customers are frequently doing something to make your product less than ideal, then you have an education issue that needs to be resolved.


> Spreading wrong performance information can hurt a business.

Spreading correct performance information can hurt Atlassian's business. That's why they don't want us to.


I'm surprised and disappointed by this stance. Other people in this thread have mentioned that such a clause isn't even legally enforceable, so I'm curious if you have any perspective on that.

Ultimately, though I appreciate you sharing your perspective, I personally will be steering clear of caddy and your other work.


Have you ever invoked that clause to demand that someone take down benchmarks of Caddy?

Do you believe that clause in the Caddy EULA is enforceable in the United States under the Consumer Review Fairness Act?


For me, if benchmarks are forbidden, it means the software is significantly worse than its competitors. So this clause by itself is worse than any benchmark. There might be exceptions like Oracle, but not everyone is Oracle.


> Benchmarking requires expertise that, it turns out, very few people have. I don't think I even have enough skills to do it correctly and meaningfully.

Very important and often overlooked point.

But I wonder, why not forbid public dissemination of inaccurate, non-reproducible benchmarks?

> Spreading wrong performance information can hurt a business.

Wouldn't that be libel? (IANAL)


Expertise so rare it seems Atlassian themselves don't have it.


The problem is when the benchmark is accurate, reproducible, and based on a completely nonsensical scenario. It's not libel to say "Jira is 50% slower than Bugzilla" when your benchmark serially creates 25000 tickets, but it's not a fair claim either.

That's not to say that anti-benchmarking licenses are the right solution, of course. I can just sympathize with why Atlassian wants one.


Suppose you state that a product performed poorly on measurement Y of benchmark X. The problem is not reproducibility or accuracy; it's that either Y or X is always really stupid in some way or another.



