Hacker News | segf4ult's comments

That idea sounds fantastic! I would definitely be interested. I'm sure I would end up spending a significant amount of time on the site.


That's weird. I'll look into a fix.

Edit: I couldn't reproduce this with Chromium on Ubuntu 16.10. I might set up an Arch Linux box to see if that makes a difference.


It happens with Firefox 52 on Debian. I also have uBlock Origin installed.

Maybe you use some strange fonts?


While I agree that Anna Kendrick might not need a biography at 31, she definitely isn't Q list. Probably A list, maybe B if we're being picky.


It's mostly because good UX testing costs a ton of money and these projects are mostly created by pure developers and not designers.


I'll bite.

The same could be said for programming: it costs a lot to produce code. In fact, if you run sloccount on Inkscape you end up with about 470,000 lines of code. However you slice it, that's a considerable effort. Yet people have contributed that for free.

Does this mean that UX-designers aren't interested in working on open source projects? Or does it mean that it is hard for them to collaborate with open source developers? If so: what are those problems?

The reason I'm wondering this is that if I were a UX designer wanting to make a name for myself, making the UX of some piece of open source software really good would seem like the best possible way to promote myself.


I think it's a slightly different issue. A UX designer usually does not have the ability to run a project independently from the beginning. A programmer can, even if it results in a crappy UI.

Successful open source projects don't usually start with teams. They start with programmers with an itch to scratch. It's hard to get investment out of a group of people. Everyone is always keen at the beginning, but drifts away as the realities of the daily grind kick in.

So if you are a non-programming UX person, you are dependent on a programmer who might not stick around. Then you are shopping around for programmers (who never show up). If you are a programmer who wants a UX specialist, the one you start with usually wanders off, and then you are shopping for a new one (who never shows up).

The result is that programmers develop a culture of building these kinds of apps and UX designers don't. Even if a UX designer shows up late in the process, they now have to fight all the programmers who have gotten used to making the decisions. And one of the perks of running your own project is not having to listen to other people. If you want to do what other people tell you, you can get paid a lot more ;-).


I have a theory. There's a much weaker positive feedback loop. If a commercial product is nicer looking and easier to use, more people will buy it so there's more money to fund development. Management will demand redesigns and improvements because that creates more revenue.

With open source, the feedback loop from users to developers is much weaker because it's not powered by money. It turns out changes in cash flow are a valuable signaling mechanism. I think this is an issue with many aspects of Open Source development, not just UI.

I'm not in any way saying this is an unresolvable issue or that it dooms OS projects, or anything like that. I just think it's a factor.


My two cents:

- UX designers who are actually good at their job seem to be rarer than good programmers

- Hence the pool that would or could contribute to FOSS is smaller

- Getting UX changes in is likely something that maintainers don't really like, because it's harder and more time-consuming to review (e.g. code you can just look at, but to review an XML UI you need to apply patches, compile/build, play with it, etc.). UI changes will also almost certainly require a bunch of code changes, so if you're changing a ton of lines to "improve UX", that's not really a great incentive for a maintainer to invest his time into merging it (because it's "just UX").

UX also tends to be opinionated: the guy who wrote the interface originally probably has no issues using it himself, but is probably somewhat opposed-by-default to changing it (works for me™).

Also, if the UX human isn't also a good coder, she probably needs to find someone to work with her on the code changes needed to support her UX changes, which would be quite challenging in many projects. (Why would I invest a lot of time working with someone I don't know, who has no reputation in the project, on something where maintainers will probably not give some sort of "yeah, we'll merge that" waiver?)

- Good UX has a lot to do with consistency. So fixing up an inconsistent project will mean tons of changes, exacerbating all the issues above. It might be as big a change as porting to a different language or framework. How likely is maintainer acceptance for such changes?

So unless there is a process for doing UX in a project, and all the project maintainers back the process, and the process is actually somewhat sane, it's unlikely that it'll work out. This might mean things like having an UX review queue and UX review for new interfaces. Imposing strict UX guides on code/feature contributions might also deter contributions. So you also have to balance these things.


«to review a .XML UI you need to apply patches, compile/build, play with it etc.»

That's also assuming that you have an XML or designer friendly UI toolkit at all. A lot of Open Source is built with GUI toolkits that use essentially 100% code and don't have good designer-friendly tooling (because the programmers were busy scratching other needs first). That definitely narrows a lot of designer input to designers that also can code.

A lot of people give Electron flak, but if there is one benefit to Electron becoming more common in the wild for open source projects, its the relatively much more designer friendliness of HTML/CSS(/SVG/etc) as opposed to classical C++ macros and duct tape UI toolkits.

For that matter, and bringing things a bit more on topic, I'd think Inkscape could be a good application to "dogfood" their own tool and own code a bit further and build more of their UI/UX elements in SVG itself. (Maybe even in a way that others could piggy back off of for future applications.)


"Don't have good designer-friendly tooling (because the programmers were busy scratching other needs first). That definitely narrows a lot of designer input to designers that also can code."

This is essentially the case with GTK+. Glade, though it has potential, is practically useless in its current state. I've searched far and wide for a mature project using it - so I could learn how to use it on the next level - and came up empty-handed. There is no next level.

As far as it goes with GTK+, the only people designing UIs are programmers. It's all either hard coded in C or Vala (or sometimes C++), or defined in XML using tons of features that aren't implemented in Glade at all, and require an intimate knowledge of the GTK+ API.

As a free software enthusiast trying to work on a GTK+ application, I'll be the first to say, things could be (need to be) much better here! Beyond a couple of trivial examples, the reference, and a couple of dated books, documentation is extremely scarce. The tooling is incomplete, and it seems that the majority of people working in GTK+ have either grown used to working like this, or moved to another UI stack. Making GTK+ programming more accessible doesn't seem to be a priority (and adding more language bindings, or entirely new languages, doesn't help nearly as much as a couple of solid books on GTK+/GTKmm and a first-class Glade would).

The only way to really learn the ropes seems to be meticulously dissecting dozens of free software projects. No one will ever invest their time to do this commercially when they can hit the ground running with Qt. It only seems remotely possible for stubborn people who write software as a hobby, and people who have matured to the point that they can sell their knowledge. For a UX designer to end up here, they would have had to develop much more valuable skills in the process.


Two things:

- Consistency. A good UI requires consistency throughout, which requires a good understanding of the entirety of a piece of software, and a willingness to work at a high level. For open source packages there’s rarely a UI standard to hold individual problems up against.

- User testing. UX design doesn't work in a vacuum; it requires user testing to be effective. This costs a lot of time and effort to organise, and isn't something you can hack on for a few nights. It's possible to do A/B testing to gain data, of course, but then you have to persuade your friendly developers to implement a testing framework and have some place to store data while waiting for the results to analyse.


The easiest way to get consistency is to have a single person in charge of the UI. It doesn't need to be a UX designer specifically, just someone with a strong sense of style.

For a contrary opinion on the cost of usability testing, see "hallway usability testing" at https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...


Part of good UX is consistency, and getting that kind of requires a consistent team or an all-knowing overlord. My guess is that's why it's hard to get to and keep a good UX in OSS. Once the UX is good there shouldn't be much work for a designer to do, and they're not going to sit around, unpaid, just to make sure someone doesn't mess it up; that is boring.


I'll add another potential reason: culture. Many (most) developers grew up around open source, or at least interacted with it in significant ways during their (work and/or hobby) careers. Many if not most of the tools they (we) use for programming come from open source too. So we're kind of used to this environment of collaboration. When you see a project you like and have some spare time, it's natural to contribute.

I doubt UX specialists live in such a culture. I might be wrong, but from my experience with artists and designers, it's much more of an individualistic culture with stronger commercial interest. I.e. "I do it for myself" and then "want my work - pay me".

If that is true, then there are simply far fewer UX specialists with spare time and an interest in open source than there are such developers.


While Apple may not intentionally keep you from leaving their platform, a side effect of their tight integration is that it's harder to leave. Take the Apple Watch for example. Without an iPhone, the Apple Watch goes from a smartwatch to only being able to tell the time. Sure, you could own an Android phone and an Apple Watch, but you wouldn't get any of the benefits of the Apple Watch, which further ties customers to the iPhone.


True, but the Apple Watch is in most ways just an iPhone accessory, and you could make this same argument about any iPhone accessory (though other accessories typically aren't as expensive as the Apple Watch is).


But Apple entered the smart watch market by making an iPhone accessory. Whereas when they entered the smart phone market they made a phone for everyone, not just Mac users. On the other hand IIRC the original iPod required a Mac and they opened that up later.


AFAIK all smart watches require other devices to configure them. Even if you give them cellular connections and whatnot, the screen is just too tiny to do any real configuring on the device directly. But who knows, maybe someday we'll have Apple Watches that can be configured using icloud.com.


Each session would need to be decrypted individually, but if it only took you a few ms per session, you could essentially decrypt as many as you want.


Moreover, from what we know of the physics of our current general-purpose quantum gates (different rules apply for quantum annealers like the D-Wave, but those cannot perform Shor's algorithm), it is unlikely a computation will take much longer than that. Quantum states in circuits decohere quite rapidly; this is the main obstacle we face in developing them. Chances are we won't have a choice but to do things relatively quickly with quantum computers (there are some cases where we need to take time to prepare resources for the computation, but those have the advantage of being trivially parallelizable).

On the plus side, quantum algorithms will be unable to perform a Logjam-style attack[1], where you do part of the computation once because the same parameters are reused by many servers. You can't copy quantum memory in any useful sense.

[1]: https://weakdh.org/


Which "fresh PC titles" are you referring to?


I guess the latest Tomb Raider is a fair comparison? The Witcher and the just-released Doom also seem impressive.


And what country are you in where things like this don't happen?


Well, I can't speak for many countries, but Irish police figures are slightly more flattering, with 8 people shot by police in total over 15 years. http://www.independent.ie/irish-news/eight-fatally-shot-by-g...

This is mostly down to the fact that standard police there don't carry firearms at all. So you could say things are at least a little different in a few places elsewhere.


From Australia, live in Japan. Having a gun pointed at you by police would not happen in the situation described.


On the other hand, Japan's justice system has a 99% conviction rate. That doesn't happen without blatant, systemic corruption.

I'd rather deal with cops who might shoot me than cops who will definitely get a confession out of me if they want one.


USA has 93%. Both figures include plea bargains. https://en.wikipedia.org/wiki/Conviction_rate


AFAIK a big chunk of the 99% conviction rate is because police will not pursue a case that isn't an easy, open-and-shut win.


It's very disturbing to me that there are any qualifiers that could make that statistic acceptable. It's like explaining the FISA court's approval rate.


Not very different from the US.

"For 2012, the US Department of Justice reported a 93% conviction rate." -- https://en.wikipedia.org/wiki/Conviction_rate

In the US you can have your cake and eat it too: get shot by cops and have to plead guilty to avoid spending a huge chunk of your life in jail in favor of spending a slightly smaller chunk of your life in jail. America's the greatest country in the world (when it comes to injustice by the numbers).


   Not very different from the US.
It's extremely different. That 93% figure is limited to federal criminal cases. The vast majority of offenses are state and local. This same wikipedia article says, "In recent years, the conviction rate has averaged approximately 84% in Texas, 82% in California, 72% in New York, 67% in North Carolina, and 59% in Florida."

So, you're talking anywhere from (roughly) seven times as frequent acquittals to over 40 times as many.

"Not very different", you say?


No, they only try cases they can win.


They also get to basically torture you for 22 days without charges.


Instead they will just arrest you and torture you into confessing.


What countries do you expect things like this to happen so I can avoid them?


I would say that, as an example, in most European countries the relationship between police and citizens is a lot more civilized. But then again, this is my impression; I'm not sure if it's based on facts. I would certainly consider this if I was planning to travel to the US.


You can now buy laptops preinstalled with Ubuntu directly from Dell.


You can buy ONE laptop with Linux from Dell, and even that one is advertised only to developers.



I clicked your HP link (because the 76 ones are really only desktop computers disguised as laptops).

Then clicked the laptops link.

Then clicked every single one of the models and clicked customize.

Every single one of the options was Windows 7, 8, or 10. Only.

Did you manage to go from that page to any actual product where you could just enter your credit card and receive it with Ubuntu?

PS: I should tell you that I'm not being pessimistic, just realistic. My current personal computer is an Asus Eee PC 1000, from 2007 or so, the very first netbook with an SSD that shipped with Linux. I replaced the Asus distro garbage with Debian right away, but I got it with Linux for the principle of it.


Because tarsnap is cheap, incredibly well documented, open source, and run by an awesome guy. It's an all around win-win.


Rsync.net is even cheaper, has no requirement for a custom client, and is arguably more dependable because they're not just reselling S3.

Edit: not to mention they offer actual support, not just a "contact the author" email link as a last resort.


I personally just trust Colin's crypto skills more than anyone else's.


So you're saying you trust a single developer to both write an encryption tool and run the servers it talks to more than the combined possibilities using existing open source tools to create backups by encrypting data locally and storing it remotely via ssh/sftp?


Yes, when it comes to crypto I'd put my trust in highly talented people over my own ability to glue together a collection of OSS tools any day.


You seem to have misunderstood me.

I didn't suggest you should write your own encryption tool. There are numerous open source tools for creating encrypted backups, some do deduplication first too.

If the tool doesn't happen to support remote storage, a simple rsync or scp fills that part.

Literally the only thing unique about this service is the use of the term picodollars and the single individual it's all reliant on.


It's the dropbox discussion all over again. We know how that turned out, don't we?


Would you care to elaborate?


When Drew first did a "Show HN" [0] (before it was a thing, actually), there were a lot of responses about how it didn't do anything new that couldn't already be done by a technically inclined person (see the first two top comments in the post).

To make a comparison with tarsnap: while it's probably possible to do encrypted backup manually with a combination of shell scripts and such, there are just too many moving pieces that can go wrong. Where do you store the backup? Someone mentions S3, but even managing backups on S3 with deduplication is not trivial, and managing the encryption process is definitely not something most of us can say with confidence we won't mess up. I can imagine a thousand ways that I encrypt something and then am unable to decrypt it back.

And then maintenance is also an issue: if I'm using a set of OSS tools, I would have to make sure each tool is being maintained, and follow any potential disclosures on bugs/updates etc. With Tarsnap, I know I will get an email from cperciva if something comes up.

[0]: https://news.ycombinator.com/item?id=8863


As I already stated I never said you or I or most people should write our own encryption tools.

There are many open source backup tools. They offer a wide range of features such as data deduplication, references/hard links to simulate total backups without copying unchanged files, data compression, data encryption, logging, reporting, remote storage and/or remote sync.

Not all tools offer all features. Not all features work the same way, but there are many options.

Those that don't offer remote storage/sync can be set up very simply to back up locally and then sync/copy to your remote file store of choice: another server, S3, rsync.net, etc.

The majority of these tools are shipped as part of Linux distribution repos, so there are almost certainly many more people using them, and multiple people with a vested interest in maintaining them.

And for reference, I agree with the comment(s) about Dropbox. The only difference is that they offer a more intuitive GUI which so far is lacking in open solutions.


I didn't mention writing encryption tools, I was simply saying that plugging all the available tools to use together is non-trivial.

Tarsnap is to encrypted backup what Dropbox is to file syncing (to a certain extent, obviously). I can understand why, for someone knowledgeable like you, the benefit isn't obvious, just like we don't see the benefit of Dropbox over other tools. But certain demographics will see tarsnap/Dropbox as value added, and are willing to pay for them (with good reason, too).

I know a lot of developers who have never spun up an EC2 instance, can't find their way around setting up a server, and are certainly not interested in maintaining an offsite server for backup. To them, tarsnap's command line provides enough simplicity to be usable (of course, it could be much better, as patio11 and a lot of people pointed out).


Plugging together?

I'm talking about working tools. They either do everything when invoked, or write to a file/dir on disk that can then have rsync invoked to copy offsite.

I'm talking about maybe a 4-line shell script, if that.

If someone can't handle that amount of setup, maybe they shouldn't be the person setting up mission critical backups?
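For what it's worth, the "back up locally, then copy offsite" pattern being described really is tiny. Here's a toy sketch using only the Python stdlib; the temp directories and filenames are stand-ins, and a real setup would encrypt the archive (e.g. with gpg) and ship it with rsync/scp instead of the local copy shown here.

```python
import tarfile, tempfile, pathlib, shutil

# Everything lives in a throwaway temp dir so the sketch is self-contained.
work = pathlib.Path(tempfile.mkdtemp())
source = work / "home"
source.mkdir()
(source / "notes.txt").write_text("important data")

offsite = work / "remote"   # stand-in for the remote file store
offsite.mkdir()

# Step 1: snapshot the source tree into a compressed archive.
archive = work / "backup.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(source, arcname="home")

# Step 2: ship it offsite (in practice: rsync/scp after encrypting).
shutil.copy(archive, offsite)

print(sorted(p.name for p in offsite.iterdir()))  # → ['backup.tar.gz']
```

That's essentially the whole pipeline; the remaining work (encryption, rotation, verification) is exactly where the "too many moving pieces" argument upthread comes in.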


Check out the key roles; you can split up writing and deleting archives, so - for example - a hacked machine can't delete the archives. This is nice.


I contacted the author today. He responded to me in 30 seconds.


Try in 18 hours. Can you call him when something fails?

I'm not saying he isn't responsive I'm saying depending on a one-man-band who is responsible for the client software, server software and the underlying storage system (ie he is the owner of the s3 account) seems like a huge risk.


I assume he still has to sleep, at least on some days. :)


tarsnap is not open source:

"While the Tarsnap code itself has not been released under an open source license, some of the "reusable components" have been published separately under a BSD license"

http://www.tarsnap.com/oss.html

The source code for tarsnap is available to view, so you could audit/inspect it yourself, but it is not under an open source license.


But it's not cheap, which was my point. 100GB of storage costs:

$300/year at tarsnap

$36/year at S3


Finally, numbers other than picodollars and gigabyte-months and unpredictable deduplication. This convinces me I don't want to store 4TB there at a huge cost ($12,000/year if it's really $300 a year for 100GB) compared to buying two 4TB drives (~€250 per 3-4 years) and placing them at a friend's with free bandwidth.

Don't get me wrong: managed, off-site encrypted backups are very attractive, and I might be willing to pay a premium, especially for software from a trusted person, but not the cost price hundredfold.


Tarsnap isn't intended to be used as a one-time backup like that, and it's super expensive if used that way. It's very cheap when used to back up (almost) the same 4GB for 1000 days in a row, which is what a lot of people/businesses need from their backup solutions.


It's not one time, I'd be incrementally writing updates to the disks. With a raspberry pi or something, the power costs are near negligible.


Rough estimate here:

If you upload 4TB in a year, that's 333.33GB/month.

So for tarsnap that equals:

- $1k/year in data transfer charges (333GB/month * $0.25/GB transfer * 12 months)

- $83 more per month in storage for each month's new data (333GB * $0.25/GB-month)

- ~$6.5k in storage for the first year ($83 * 78 cumulative month-slices)

So ~$7.5k for the first 12 months, resulting in 4TB stored.

If usage stays the same, each year will add another ~$12k to the yearly cost.
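Sanity-checking that estimate: a short script, assuming a flat $0.25/GB for transfer and $0.25/GB-month for storage (roughly Tarsnap's advertised 250 picodollars per byte; the prices are an assumption, not a quote), lands in the same ballpark.

```python
# Rough Tarsnap cost sketch: 4 TB uploaded evenly over 12 months.
PRICE_PER_GB = 0.25            # assumed $/GB transfer and $/GB-month storage
MONTHLY_UPLOAD_GB = 4000 / 12  # ~333 GB of new data each month

# Each GB is transferred once.
transfer = 4000 * PRICE_PER_GB  # ~$1,000

# Month 1 holds ~333 GB, month 2 ~666 GB, ... month 12 holds 4 TB,
# i.e. 1+2+...+12 = 78 month-slices of ~333 GB each.
storage = sum(m * MONTHLY_UPLOAD_GB for m in range(1, 13)) * PRICE_PER_GB

print(f"transfer: ${transfer:,.0f}")            # ~$1,000
print(f"storage:  ${storage:,.0f}")             # ~$6,500
print(f"year one: ${transfer + storage:,.0f}")  # ~$7,500
```

In steady state, holding 4 TB costs 4000 * $0.25 * 12 ≈ $12k per year, which is where the recurring figure comes from.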


> that's 333.33gb/month

I have 4TB of data, which changes an unknown amount (probably around 20-50GB per month) and grows slightly (probably 5-15GB per month).

In any case, thanks for the calculation. Tarsnap is apparently not for the common person who wants to back up everything including their media.


That actually works out worse - $14K for the first year, $13K for the second year.


Add another $1k to that for data transfer (assuming you only upload that 4tb once)


That sort of backup is what AWS Glacier is for, is it not?


I guess, I haven't really looked at it yet. And I'd have to find my own software to encrypt it before uploading. Tarsnap's software is one of the major selling points, at least to me.


I hate to think of the cost if you had to restore that data from Glacier, though.


How much do you save after deduplication? Tarsnap could be a lot cheaper if you do frequent backups, or if you often change little in huge files.


Backup tools like attic (which I use) include automatic deduplication. There are surely minor differences in implementation, but tarsnap isn't the only deduplicating backup implementation.
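To illustrate why deduplication makes repeated backups cheap: the store is keyed by chunk hash, so a second backup that changes little uploads little. This toy sketch uses fixed-size chunks for brevity; real tools like attic and tarsnap use smarter content-defined chunking.

```python
import hashlib

CHUNK = 4  # bytes; absurdly small, purely for demonstration

def backup(data: bytes, store: dict) -> int:
    """Split data into chunks keyed by hash; return bytes actually added."""
    added = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:      # only previously unseen chunks are stored
            store[key] = chunk
            added += len(chunk)
    return added

store = {}
first = backup(b"AAAABBBBCCCC", store)       # all 12 bytes are new
second = backup(b"AAAABBBBCCCCDDDD", store)  # only the DDDD chunk is new
print(first, second)  # → 12 4
```

The second backup costs 4 bytes instead of 16, which is the effect that makes "back up (almost) the same data every day" pricing work.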

