
They are both part of the United Kingdom (which grew out of the union of the kingdoms of England and Scotland).

Great Britain is the name of the island both are located upon. England and Scotland will always be a part of Britain, because that's a geographical area, not a political one.


>England and Scotland will always be a part of Britain

Not necessarily: they could dig a big canal across the island along the border.


Yeah. It really does take more effort to leave perfectly working functionality in place than to do the work to remove it.

Come on. Many of the decisions made by the GTK+ developers are utterly unjustifiable.

Take the removal of GtkHBox and GtkVBox in favour of GtkBox. A couple of shims for backward compatibility would have taken just a few lines of code and would have avoided a hard compatibility break. Did that happen? No. So a huge burden to update every GTK+ application (incompatibly!) was imposed upon every developer on the planet. Was that a productive use of resources? No. It was an utterly illogical change which had zero real-world benefit to anyone. And that's just one of many, many bad decisions.

There are very valid complaints to be made about GTK+, and you can't handwave them all away with some PR comments.


Couldn't you just make your own macros?

    /* rough constructor shims; the old "homogeneous" flag is ignored here */
    #define gtk_hbox_new(homogeneous, spacing) gtk_box_new (GTK_ORIENTATION_HORIZONTAL, (spacing))
    #define gtk_vbox_new(homogeneous, spacing) gtk_box_new (GTK_ORIENTATION_VERTICAL, (spacing))
I thought the change to a standard GtkBox made sense given the way the system was built. This made both kinds of boxes the same class, avoiding an unnecessary layer of inheritance.


Yes, you could.

But this needs doing in every single downstream user of GTK. Far more efficient to do it once, in the toolkit itself.

This is why sane people don't use GTK. The maintainers literally couldn't care less that their 2 hours of "work" doing the removal causes hundreds of thousands of man-hours of refactoring on the part of others, plus the testing and validation work to prove that every part of their applications is still working correctly.


No, that's incorrect. It's not more efficient at all, and even if it were, it would still be less efficient to do it in the toolkit than in a separate shim. I explained this in detail in my other comment.

Also the testing and validation always needs to happen irrespective of where the shim layer is shipped, because this is still a new major release of a relatively large library. Having a shim for one widget out of many isn't going to meaningfully reduce the need for testing.

If you could name the programs that are somehow completing the rest of their GTK4 ports in a small amount of time, refactoring everything to use the new improved widgets and rendering system at incredible speed, yet are taking hundreds of thousands of man-hours to replace the word HBox with Box, maybe someone can look at them and advise them on how to do that faster.


You're trying to argue that water is not wet. This is seriously bad-faith argumentation.

Having a compatibility shim in GTK and testing that, once, would save any downstream user from having to do any work at all for this change. This is obvious and self-evident. Yet you seem to think that it's acceptable to impose this upon every user. It's this disrespect for end-users' time and resources which led people such as myself to abandon GTK entirely, when it was abundantly clear that it would not, and could not, ever be fit for purpose while this type of attitude prevails. Professional library maintainers do not break backward compatibility in such a cavalier manner, particularly when there is zero material net benefit of any sort arising from the change. Why would a project deliberately cause such breakage when it didn't have to? It's because it cared more about making a "cleanup" than it did about breaking its entire userbase. It's cosmetic at best, and it could have been implemented without any break at all with a bare minimum of effort. That's the real kicker. The change could have been made without any compatibility break. And that just shows a complete lack of care.

Bear in mind also that Gtk*Box are foundational container widgets. Every application of any serious size will likely use hundreds or thousands of them. And no upgrade path for Glade/GtkBuilder XML either. That all needs hand-updating too. And this is just one example of breakage. You have to multiply it by all of the others, too. The ongoing burden of unnecessary and unproductive work repairing breakage arising from API churn is extraordinarily costly. Plus, it also breaks compatibility of our application code with older GTK versions, which we might well also need to support in parallel for years. None of this adds value to our application; it's all cost.

You've spent pretty much all of your comments here deflecting and prevaricating. You've not once shown any concern of any sort for the actual real-world problems which have been imposed upon others, and which are genuine deal-breakers for actual application developers who have tried to use GTK for serious commercial work. The exact same lack of concern and understanding which the GTK and GNOME developers have shown all along. And I'm not new to this. I've used GNOME since pre-1.0 and developed with GTK+ since the 1.x and 2.x days. I was using GTK+ for commercial products two decades ago. It was barely viable then, and it's many times worse now. The primary concerns of these libraries should be API stability and implementation quality, and they have repeatedly failed at both.

If GTK wants to be considered seriously, it needs to behave seriously. And you need to actually listen and understand what people are telling you.


I just realized I forgot to mention this before: that shim actually did exist for about 10 years. It was deprecated in GTK3 and then finally removed in GTK4. If 10 years is too short a warning to give for removing a deprecated API, and this offense is apparently so bad that it ruins the credibility of the whole project, then I really don't know what could be expected of the maintainers.


>Having a compatibility shim in GTK and testing that, once, would save any downstream user from having to do any work at all for this change.

No, it wouldn't: the downstream users would still have to test, because in practice there are a lot more changes that need to be made than just the names of those widgets. This is one of those spherical cow situations. It would theoretically save time if apps only used the box containers and never called any methods on them, but that's not how real apps are actually built.

>Why would a project deliberately cause such breakage when it didn't have to? It's because it cared more about making a "cleanup" than it did about breaking its entire userbase. It's cosmetic at best, and it could have been implemented without any break at all with a bare minimum of effort. That's the real kicker. The change could have been made without any compatibility break. And that just shows a complete lack of care.

No, this is all wrong. The container widgets were refactored and heavily simplified in GTK4 to make the API easier to use and maintain, because the class hierarchy was getting too deep and complex. Along with that, they changed the names: there was a break in the underlying APIs anyway, so it was a perfect opportunity to simplify the naming as well. It would not have helped at all to make such a tiny shim; it wouldn't even cover the most basic use cases. Like I already said, the shim would have to be much larger to be anywhere close to being useful.

>And no upgrade path for Glade/GtkBuilder XML either.

No, there is an automated converter for the XML (gtk4-builder-tool).

>And this is just one example of breakage. You have to multiply it by all of the others, too. The ongoing burden of unnecessary and unproductive work repairing breakage arising from API churn is extraordinarily costly. Plus, it also breaks compatibility of our application code with older GTK versions, which we might well also need to support in parallel for years. None of this adds value to our application; it's all cost.

Then by all means, don't update the GTK version. The reason to upgrade is if you want the new features in the new version.

>You've not once shown any concern of any sort for the actual real-world problems which have been imposed upon others

Actually I just asked for some examples of real-world programs that are having this problem; if you could post the repositories then we can talk about them.

>And I'm not new to this. I've used GNOME since pre-1.0 and developed with GTK+ since the 1.x and 2.x days. I was using GTK+ for commercial products two decades ago. It was barely viable then, and it's many times worse now.

Unclear to me why you've been using GTK for 20 years if it's really that bad.

>The primary concerns of these libraries should be API stability and implementation quality

No, not really. The developers can choose that concern but they don't have to. Some projects focus on stability, some focus on getting more features out the door, some focus on other things. I can't explain everything about GTK's decisions, but I know that like most open source projects they have to make decisions that encourage certain types of contributions, and sometimes that means trading away some stability. And that also means that if you see something low quality that you can fix easily, then you should start contributing or fork the project, instead of demanding that the maintainers do it for you.

>If GTK wants to be considered seriously, it needs to behave seriously. And you need to actually listen and understand what people are telling you.

I'm not a GTK maintainer so I'm not even the person you would need to convince here. But I am listening to what you have to say, that's why I can tell you with confidence that it wouldn't have helped to make that type of shim.


Sorry but that's a really poor example and IMO not a valid complaint. I don't think they made the wrong decision there. There are quite a lot of other API changes in GTK4, so some tiny shims for only a couple of APIs wouldn't help in porting at all. It would only create confusion, because there would be two APIs for the same thing and it would be even more unclear when the old API is going to be removed. The argument you're essentially making here is "keep deprecated APIs around forever", which isn't realistic. They're deprecated for a reason; if you never remove them, then deprecation isn't meaningful anymore.

Also you're incorrect that it would be "just a few lines of code." Those things are GObject classes which can be referred to in various ways through the runtime type system or by language bindings; it's not just a matter of creating some aliases for C symbols. If you only use those classes in such limited ways that a tiny shim would do the job, then it would be just as easy, and more beneficial, to create a small script that does a search and replace on your entire project.

It would be entirely possible to create a larger shim to ease porting, and keep it outside the main project so it doesn't cause confusion. But for it to be truly useful, someone would have to put a lot of thought and effort into making it work for a good portion of the APIs that changed. Then it would have to be tested thoroughly with all the language bindings. It's a way bigger project than just shipping a couple of #defines in a header. And if it did exist, it too would become deprecated and obsolete at some point, once all the apps finish their ports to the new version. None of this is a new idea -- all of this is exactly what Qt already did with Qt5Compat. It could be done in GTK as well, but some interested party needs to make it happen. So far, no one has cared enough to put their money where their mouth is and actually do it.
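
To make concrete what "larger" means, here is a rough sketch, purely my own illustration and not anything GTK ships, of just one corner of such a shim: registering a real "GtkHBox" type derived from GtkBox so lookups by the old class name and the old constructor keep working. It assumes GtkBox stays derivable (it is in GTK4), and it still ignores the old cast macros, the child-packing API and all the language bindings:

    /* Sketch of a hypothetical compat shim; this is not GTK project code. */
    #include <gtk/gtk.h>

    typedef struct { GtkBox      parent_instance; } GtkHBox;
    typedef struct { GtkBoxClass parent_class;    } GtkHBoxClass;

    /* Defines gtk_hbox_get_type(), registering the type under the old
     * class name "GtkHBox" so runtime lookups by name can still work. */
    G_DEFINE_TYPE (GtkHBox, gtk_hbox, GTK_TYPE_BOX)

    static void
    gtk_hbox_class_init (GtkHBoxClass *klass)
    {
    }

    static void
    gtk_hbox_init (GtkHBox *box)
    {
      gtk_orientable_set_orientation (GTK_ORIENTABLE (box),
                                      GTK_ORIENTATION_HORIZONTAL);
    }

    /* The old constructor, mapped onto the GtkBox properties. */
    GtkWidget *
    gtk_hbox_new (gboolean homogeneous, int spacing)
    {
      return g_object_new (gtk_hbox_get_type (),
                           "homogeneous", homogeneous,
                           "spacing", spacing,
                           NULL);
    }

And that's only the C side of a single widget; doing this properly across all the changed APIs and the bindings is exactly the Qt5Compat-sized effort I described above.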

BTW even the Linux kernel does a thing now where they don't use deprecation attributes in the code at all anymore. If a kernel developer intends to remove an API then they just delete it and fix the build. Because in practice it's actually much worse to keep an API around long past its expiration date and annoy everyone with deprecation warnings.


Your comments show that you have a close understanding of technical details and processes used in the Gnome project. I believe you should come clean about what your relationship with the Gnome project is. As others have hinted, you do not seem like the impartial, external, only-just-a-user you claim to be. You clearly have an agenda.

You only registered an account yesterday and have only commented in this thread. Most of your replies include comments like the following:

* This seems to me a very bizarre request.

* That doesn't make sense.

* See, I think now you are being too overly dramatic.

* Perhaps that's proof that it isn't as bad as the vocal minority says it is?

* It's odd you say those things...

* Sorry but that's a really poor example and IMO not a valid complaint.

* Please avoid this narrative.

* You're disrespecting yourself and the readers of your comments by making these kinds of hyperbolic statements.

* The issues you mention here are mostly not relevant anymore...

* Speak for yourself please...

That shows a pattern. You seem to dismiss everything everyone else is saying. Considering the history of the Gnome developers' attitude towards users and their requests, this leaves little doubt about your connection.


I don't have any relationship or agenda. If you really want to know, I had an account here long ago, but I received some very rude, hateful and harassing comments, so I stopped posting and then lost the password. Is it that much of a stretch for you to believe there's a GNOME user who isn't angry at the developers and doesn't share your opinion? I don't get my FOSS news from social media. I read the developers' blogs and announcements directly, and I don't assume they're lying or trying to hide some secret evil agenda; perhaps that's why you see contrasts between my attitude and the attitudes of others?

>You seem to dismiss everything everyone else is saying.

But disagreeing with something is not a dismissal. In cases where I disagree, I'm careful to state the exact reasons why and discuss, or present some facts or explanations that someone may have overlooked. That's how to keep the discussion engaging even if you disagree. I'll only dismiss someone if they're intentionally rude. And this comment is against this part of the guidelines: "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken." I don't have any way to advance the discussion when you do that; it's a personal attack, and it puts me on the defensive when I haven't done anything but state my opinion. Can't you see how that becomes a way to systematically shut down discussions and make them hostile, when someone constantly throws those accusations at strangers?

>Considering the history of the Gnome developers' attitude towards users and their requests

There is no free software project anywhere that is obligated to honor any user's requests. If you have a problem with this, you should not use FOSS. But if you absolutely need someone to honor your requests, you need to pay them and get the contract in writing so they're legally required to do so.


The Linux kernel changes internal APIs, and whoever changes it gets to fix the kernel code that relies on it. The end user of the kernel -- user space -- never sees any of this, and the userspace API never breaks.

You're implying this is equivalent to GTK and Gnome intentionally breaking API with every major release for every application that uses those libs. It is not. Frankly it's a bad faith argument.


>The end user of the kernel -- user space -- never sees any of this

I mentioned it because despite that difference I don't think it's practically much different from a developer perspective. Big changes are still about as organizationally difficult for kernel developers to make. If someone wants to deprecate something that a lot of other people are using and it's a lot of work, then they still have to convince everyone to go along with it, get them onboard with the new API, help out with removing the old API, etc. That's the actual hard part; sometimes it can be made easier by providing a shim, but often it can't.

>the userspace API never breaks

This is an aspirational statement, not a rule. It has been broken lots of times. The userspace API in Linux is not just the syscalls. Effectively it encompasses every single thing a driver does and exposes in some fashion to userspace, and a number of other things as well: ioctls and other non-standard interfaces exposed by block devices, sysfs entries, procfs entries, other pseudo-filesystems, netlink events, configuration files, low-level userspace libraries like libselinux and libseccomp et al (which technically aren't part of the kernel, but which the kernel developers encourage everyone to use anyway), util-linux and other utilities of that nature -- you get the picture. This stuff changes all the time and it's not even possible to keep it all stable forever because it's such a massive amount of code.

>You're implying this is equivalent to GTK and Gnome intentionally breaking API with every major release for every application that uses those libs. It is not.

Yes, you're correct that it's not exactly the same, but I'm implying the exact opposite: the GTK and GNOME changes are actually much less of a problem! You can have many versions of those libraries installed at the same time. You can't easily use many different kernel versions at the same time.

>Frankly it's a bad faith argument.

It's against the guidelines of this site to make this kind of statement. And quite frankly it's very uninteresting to respond to. You can make your point without this.


No. Why should anyone avoid discussing what happened?

We can all see with our own eyes how much GNOME cares about collaboration and interoperability with others. It's zero. It's been this way for a very, very long time. And that disdain for everyone else has consequences.

I used to develop GTK+ applications. I no longer do. Because it was an absolutely miserable experience, working with a toolkit which repeatedly requires every application developer to down tools and do a lot of busy work rewriting perfectly working code when APIs are changed or deleted. No other GUI toolkit causes so much pain and disruption to their userbase. It's quite clear that there is no regard for the actual needs of real application developers, and people like yourself aren't helping. You can't defend the indefensible.


You shouldn't avoid discussing what happened, I'm saying you should avoid making unfounded bad faith accusations.

>We can all see with our own eyes how much GNOME cares about collaboration and interoperability with others. It's zero

The blog post disproves this entire accusation by listing a bunch of projects they collaborate with. This is what I mean: please be more careful with your words. You're disrespecting yourself and the readers of your comments by making these kinds of hyperbolic statements.

>which repeatedly requires every application developer to down tools and do a lot of busy work rewriting perfectly working code when APIs are changed or deleted. No other GUI toolkit causes so much pain and disruption to their userbase

GTK isn't the first or only project to deprecate and remove APIs; Qt does it in every new version too. And you don't have to deal with this unless you're upgrading to new versions. Some projects are still using forked versions of old Qt and GTK for exactly these reasons. That's totally something you can do.


They don't really define "complexity" well enough. CMake is more complex internally than the Autotools in terms of the functionality available and the internal implementation details. But it does much, much more than the Autotools ever attempted. However, the user configuration is simpler and more powerful, and the generated build logic is simpler and more flexible in terms of the output formats available. And it only requires learning a single language, not three or four.

You're absolutely spot on with regard to its maintainability though. The Autoconf macros can't be changed without breaking everyone, so it's effectively impossible to change them, and we've seen that stagnation for well over a decade at this point. The use of m4 adds a whole heap of incidental complexity and inflexibility. You can never replace m4, because of the huge number of Autoconf macros that internally expand user-provided macros.

I'm unsure why they make such a big deal of configurable install prefixes and system feature testing. While Autoconf may have pioneered this, or at least popularised it, it's not like the alternative systems like CMake don't implement the exact same features. They do, though with slightly changed names (just cosmetic).

Where I think they have a point, is in the huge investment into the Autotools by projects and the inertia that creates to resist change. That's very real. However, I do think that the cost of change is often overestimated. Off the top of my head, I've converted four open-source Autotools-based projects to use CMake (schroot, libTIFF, Apache Xerces-C++ and Apache Xalan-C++). Some of these had very complex logic encoded in their configure scripts; some of them had libraries of Autoconf macros to do custom feature testing. And yet, while rewriting this logic to use the CMake equivalents was tedious, it was entirely doable, and in all cases it was doable in under a week with a bit of shakedown afterwards to eliminate subtle behaviour differences. For two of these (libTIFF and Xerces), the projects maintain both systems in parallel to this day.

The inertia will keep a lot of established projects using the Autotools for several more years, I would expect. If it works, the incentive to change is not going to be there until the Autotools bitrot to the extent that they break with contemporary toolchains. It will happen at some point. We already see it occasionally with libtool, and I've already seen the Autotools builds I maintain fail on Solaris, Cygwin and MinGW, where the CMake builds worked without any special effort.

Every piece of software is a child of its time. The Autotools were created to solve the Unix portability problems of the '90s, and they served that purpose well using the portable and commonly-available tools of that time. However, while these tools continue to work today, they have not evolved to solve contemporary problems, and that is why they aren't going to be a good choice for new projects. They will die by attrition if nothing else.


While using a setuid binary to edit the password/group "databases" is the historical default, there's no real technical reason why it must be that way. The passwd program could communicate with the database service via a socket. Likewise the NSS and PAM stuff could communicate with the same service via a socket. No reason for it to be lots of in-process loadable modules in this day and age.
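
As a sketch of what I mean, and purely illustrative -- the socket path, wire format and daemon are invented here, and a real design would authenticate the caller (e.g. via SO_PEERCRED) -- the client side could be as simple as:

    /* Hypothetical non-setuid passwd client talking to a privileged
     * account service over a Unix-domain socket. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int request_password_change(const char *user, const char *newhash)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char req[512];
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        /* Invented socket path; the daemon on the other end holds the
         * privileges and validates the request before touching the files. */
        strncpy(addr.sun_path, "/run/accountsd.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }

        snprintf(req, sizeof(req), "CHPASSWD %s %s\n", user, newhash);
        if (write(fd, req, strlen(req)) < 0) {
            close(fd);
            return -1;
        }

        close(fd);
        return 0;
    }

The NSS and PAM lookups could talk to the same daemon over the same socket, rather than loading modules into every process.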


Confocal and multiphoton microscopy were the staple of my Masters and PhD research. I was hooked from the first time I got to operate one, the sheer beauty of what you can capture in true 3D volumes is on another level from conventional photography and brightfield microscopy. Multiphoton lets you live image cells and tissues and directly visualise the dynamics of how they move about and change over time, which gives you even more insight into the function of biological systems. Absolutely incredible technologies.


I think the cost and complexity of reproducing work are somewhat overestimated, as is the specific expertise of individual researchers, though maybe your field is exceptional in this regard.

Primary research, pioneering new techniques and equipment to explore the unknown, is time-consuming and costly and requires a lot of original thought and repeated failure until success is achieved. However, reproducing that work doesn't involve much of this. It's taking the developed methodology and repeating the original work. That may well involve expensive equipment and materials, and developing the technical expertise to use them, but that does not involve doing everything from scratch and should not take anything like as long or cost as much.

I also believe that we far too readily overestimate the specific special skills which PhD students and postdoctoral researchers possess. Their knowledge and skills could likely be transferred to others in fairly short order. This is done in industry routinely. A PhD student is learning to research from scratch; very little of their expertise will actually be unique, and the small bit that is unique is unlikely to be difficult for others to pick up. I know we don't like to think of researchers as replaceable cogs, but for the most part they are.

My background is life sciences, and some papers comprise years of work, particularly those involving clinical studies. However, the vast majority of research techniques are shared between labs, and most analytical equipment is off the shelf from vendors, even the very expensive stuff. Custom fabrication is common--we had our own workshop for custom mechanical and electronic parts--but most of that could have been handled by any contract fabricator given the drawings. And the really expensive equipment is often a shared departmental or institutional resource. Most of the work undertaken by most of the biological and medical research labs worldwide could be easily replicated by one of the others given the resources.

Depending upon the specific field, there are contract research organisations worldwide which could pick up a lot of this type of work. For life sciences, there are hundreds of CROs which could do this.

As one small bit of perspective: in my lab a PhD student worked on a problem (without success) for over a year. We gave it to a CRO and they had it done in a week, for less than £1000. The world is full of specialists who are extremely competent at doing work for other people, and they are often far more technically competent and efficient than academic researchers.


1. I use it on a NAS system. For years this was vanilla FreeBSD from 10 to 13. A few months back I replaced the system with TrueNAS Core which is based on FreeBSD 13, retaining the ZFS pools from the original installation. This system hosts storage and network shares, services hosted in jails such as databases, build slaves and artefact storage, and Windows virtual machines also hosting services and remote desktops.

2. First-class ZFS support, full NFSv4 ACL support which works with Samba and Windows ACLs and is a massive improvement upon POSIX.1e DRAFT ACLs.

3. No real preference, but ZFS support is (for me) the killer feature. Next are jails and Bhyve.

4. The main thing lacking is the more comprehensive selection of drivers found on Linux. That said, it's pretty decent, and I find the overall quality of the drivers, and of the system as a whole, to be better than Linux's.

5. The system is engineered as a cohesive whole. While other BSDs might be similar, and perhaps Debian was attempting this a couple of decades back with its core design principles, most of the alternatives are lacking in this essential cohesiveness.


You can do a zfs send/zfs recv (e.g. zfs send -R pool/data@snap | zfs recv pool/data-copy) to copy a dataset, including all of its snapshots, to a new dataset on the same pool; this effectively rewrites the whole dataset, history included, by duplicating it.

Not hard, but it does require sufficient free space. Once it's done you can destroy the original dataset and reclaim the space.


QNX managed it! Drivers run in separate address spaces, as does the kernel, making it very robust. It's used for safety-critical applications.

