
Interestingly, the comments in this thread are making the author's point.

Yes, you can be "technically correct" saying branches are just refs, but it's not a useful statement for most users.

I believe the author makes a very valid point, and we could do with a bit less "technically correct" and a bit more language that targets usage rather than the technical implementation.

Git is confusing enough for many people, and we don't have to make it more confusing for them.


I wonder if this has more to do with how people visualise commits than branches per se. I think the git UI by default often encourages users to think of commits as diffs. A branch then is a bundle of diffs that, when stacked together, produces the current state of the repository. Internally, git uses some sort of optimisation to make calculating that current state quicker. In this mental model, it doesn't really make sense to talk about a pointer to a specific commit, because a commit without the rest of the information that makes up its branch is useless.

The problem is that this mental model isn't that useful in the first place, and often leads to confusion. Instead, it's usually easier to think of each commit as a complete snapshot of the entire codebase that includes a link to the previous complete commit that it was made from (which in turn contains a link to the previous commit, and so on). In this scenario, a branch is just a pointer to a given commit - that's pretty much the easiest way to think about it - and the commit itself is a stack of history. Internally, git optimises for compression by removing redundant information between different snapshots.

In this mental model, thinking about branches as just a different type of tag is easier than thinking about them as stacks of commits, because each commit is already a stack of commits. Moreover, I think the snapshot model often ends up being clearer and easier to use overall. All you need is the basic concepts of snapshots, a linked list, and pointers, and the whole thing kind of just falls into place.
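
A quick way to see that model in a real repo (the field values below are illustrative, not from any actual repository):

  git cat-file -p HEAD
  # tree 9c3e...     <- the complete snapshot this commit records
  # parent 4f1a...   <- link to the previous snapshot
  # author/committer/message follow
  git cat-file -p HEAD^{tree}
  # lists the files and directories in that snapshot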


A ‘series of snapshots’ and a ‘series of diffs’ are just duals of one another.

Since you can go back and forth between them at will, it seems odd to claim that one perspective is inherently superior. Like insisting that a chess board is actually a white board with 32 black squares on it.


It doesn't seem odd to claim that one of them is on average easier for beginners to use for intuition than the other, and that introducing both models simultaneously may bring more confusion than clarity.

I don't see claims of inherent superiority or correctness. It's about what's useful for education.

Nobody is arguing that you need to adopt a new mental model if what you have works for you.


There are some subtle differences in whether the canonical representation is a snapshot of the current state of the repo or a patch that produces the current state when applied to the previous one. One simple example is reordering; assuming your modifications don't change the same line, reordering patches arbitrarily won't change the final result. If you instead store snapshots of the state at each point, then reordering the snapshots won't necessarily result in the same final state, since you might have moved a different state to the final position.

You're correct that the two models are equivalent, but version control is about operations that you perform on the models, and those operations will not be the same for both models. You can reason about your git history as if it's a series of patches, but git itself doesn't know how to deal with any model other than snapshots.
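
To make that concrete, a hedged sketch of how reordering actually happens in git (hashes and messages are made up):

  git rebase -i HEAD~3
  # the todo list shows one "pick" line per commit, oldest first, e.g.:
  #   pick a1b2c3d add parser
  #   pick d4e5f6a add tests
  #   pick 789bcde fix typo
  # swapping two "pick" lines makes git re-apply each change as a patch on the new base,
  # producing brand-new snapshot commits (new ids) even when the diffs themselves are identical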


Given that git commits are immutable, reordering doesn’t matter. Any history rewriting involves creating new commits - whether those are new snapshots or new diffs.


The snapshot model is the correct model. How data is optimized via compression techniques is secondary. Thinking in terms of "diffs" is incorrect.


Targeting the usage is exactly what makes git confusing to people. You can start using git just by learning when to type "git add", "git commit", "git push" and "git pull", and you'll manage to collaborate somehow, but it will all fall apart the first time you stumble upon an unfamiliar situation. And because Git's UX isn't great, it's pretty hard to build the right mental model just by using it and inferring from the interface.

If you start by creating a mental model, the confusion goes away. Reminding people that "branch is just a ref" is just a way to push them towards less confusion.


Honestly, I think that if "git lol" was the default log command it would do the most to make things much more obvious to newcomers.

  git log --oneline --graph
And git lola for a gestalt of the repo's recent state:

  git log --oneline --graph --all
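
If you want to try these, they can be set up as aliases with:

  git config --global alias.lol "log --oneline --graph"
  git config --global alias.lola "log --oneline --graph --all"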


Just think about how useless and confusing GitHub's history view is as soon as a merge is involved. Countless times I pulled something from there just to browse the graph because of how unhelpful the web UI is.


For how popular it is, GitHub really sucks: the internet simply has miserable ways to visualize commits :/.


I'm still looking for a tool which produces a timeline similar to Fossil's, but for git.

Example Fossil graph: https://chiselapp.com/user/rkeene/repository/kitcreator/time...


I'm probably missing the details, but don't most git GUIs show a timeline like that? Such as the official desktop version and the (official?) GitLens extension in VSCode? I don't use them myself though, so I might be wrong.


gitup and other git clients, for example, do this: https://gitup.co/


I can't try gitup since it seems to require macOS -- ideally I'd want something web-based, similar to Fossil.


If you do that ^ a lot, then look at the `tig` tool, which is that with an ncurses UI (and some more features).


VSCode users may enjoy the extension "Git Graph."


This, so much this. Lots of times where I've seemed like a git whiz, it's really just that I've got a marginally better understanding of how git really works. Git is much easier to use when you wrap your head around how commits and branches are internally represented.


I think there's some confusion around the meaning of "internally represented" seen in this thread. I wouldn't really call it "internal representation", as then people complain that a tool shouldn't make them learn its implementation details - and they're right, but that's not what happens here.

You don't have to learn how git-the-tool represents things internally. However, you absolutely should learn how git-the-model-of-a-repository represents things, because that's what you're operating on. Git is a tool to manipulate repositories, just like LibreOffice is a tool to manipulate documents. You don't need to learn how ODF stores things in zipped XMLs (just like you don't have to learn how git stores things in its content-addressable filesystem), but you need to understand what paragraphs, words, pages or slides are as this is the model you're working on (just like you need to understand what commits, branches and refs are and how they form a graph).

Unlike LibreOffice, git doesn't make it easy to understand its model just by using it (you could even say that it actively misguides you, although it has good reasons to do so), so you usually have to read some docs to grasp it.


I don't think git is unusual in that regard, and I think those complaints would be unjustified. Loads of tools work totally fine with only the barest understanding of how to use them, occasionally have problems that require a bit more understanding of their internal model, and even more rarely require deep knowledge of their internal model. I think most development teams would be totally fine with only a single member who has a slightly better understanding of the internal model. That knowledge only comes into play very rarely for me. If nobody were available with that knowledge, those teams could make do by simply copying the work into unmanaged text files for a few minutes and then just "manually" override the botched merge that got them into trouble.


Yes, I think it's a common case of getting the fact right, viz. branch == just a ref (true), but the understanding wrong, viz. ref == commit (false).

A commit is an immutable object. Whereas a ref is a pointer, literally a place on disk (a regular file) that holds an address (a plaintext SHA) of the latest point in a logical chain of commits.
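
You can see this directly in any repo (branch name assumed to be "main" here; newer repos may also keep refs packed in .git/packed-refs rather than as loose files):

  cat .git/refs/heads/main   # a single commit SHA, nothing more
  git rev-parse main         # the same answer via plumbing
  git symbolic-ref HEAD      # HEAD is itself a ref, pointing at refs/heads/main while main is checked out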

Meta remark:

This is also what makes it ok to delete a branch after it is "done", and why it is ok to merge a standard working branch (like "develop") repeatedly into a target branch (like release/master/main/trunk).

The semantic / meaning of a branch is transient. It is mutable conceptually and literally.

edit: formatting


Sort of an aside, I find it funny that the reflog is named that and not commitlog. With this mental model when you look at the reflog you usually want to get back an immutable commit because you've lost the ref. I know it displays the commits and the refs, but does anyone actually look at the reflog and checkout HEAD@{6} or do they use the commit sha?


reflog is a tool to show you the history of a given ref - if you don't give it one, it defaults to HEAD. It seems to me like "reflog" is the perfect name for it and I don't see how "commitlog" would be relevant to what it does.

Did you confuse "refs" (references) with "revs" (revisions)?


No, I was going based off the mental model I replied to:

> A commit is an immutable object. Whereas a ref is a pointer, literally a place on disk (a regular file) that holds an address (plaintext SHA) to the latest point in a logical chain of commits.

Reflog shows you the immutable commit SHA and the HEAD@{N} ref. I've only ever used it to get back to a commit I've lost, never by ref, so to me it's a commitlog.


HEAD is a ref just like any other. What you're looking at after typing `git reflog` is the history of things HEAD has pointed to - it's HEAD's log. Refs don't necessarily have to point to commits, they can point to other objects too.

HEAD@{<N>} is not a ref - it's a rev in <ref>@{<N>} form that means "N positions back in ref's history" (see `man gitrevisions` for more rev forms).

> never by ref

When you look at reflog's output, you've already dereferenced these commits by the given ref and its history.

Try `git reflog <branchname>`.
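
For example (branch name assumed):

  git reflog show main       # the history of where main has pointed
  git show main@{2}          # the commit main pointed at two moves ago
  git show main@{yesterday}  # where main pointed at this time yesterday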


Yes, I've done that, because the reflog keeps more than just commits. It also keeps checkouts, merges, steps in rebasing, etc. So I've checked out a HEAD@ ref, when I made a mistake in merging or rebasing.


> Git is confusing enough for many people, and we don't have to make it more confusing for them.

Using terms with the wrong definition, and not precisely defining concepts, makes things more confusing, not less.


I'd argue git is confusing for many people because they don't understand the data model. The solution is to learn the basic data model instead of pretending that "a branch is a ref" is not true. Because it is true.


A branch is not a ref.

A head ref is a ref that names a branch. But branches can exist in git without refs. Branches are artifacts that exist in the commit DAG - they are dangling chains of commits that end without being merged into some other commit. They exist, as pure platonic branches, even if they are un-referenced.

But then you can make a head ref and name one and now all of a sudden you have a named branch. As you make more commits that extend the branch while ‘attached’ to that head, the head ref follows the tip of the branch (that is in particular a thing a head ref does that a tag ref does not).

But you can add commits and extend a branch in a detached state if you like - no head refs following the branch tip. Yet the branch definitely exists. And then if you tag it, you name it.

So no, I don’t think “a branch is a ref” tells the whole story.
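
A minimal sketch of that, assuming you start on a branch called main:

  git checkout --detach        # step off the branch; HEAD now points at a commit, not a branch ref
  git commit --allow-empty -m "a commit on an unnamed branch"
  git checkout main            # nothing refers to that new commit now; gc would eventually prune it
  git tag experiment HEAD@{1}  # ...unless you name it after the fact, as described above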


This is a strange take, in my opinion. Dangling commits like those you describe will be cleaned up by the garbage collector. To say that a “branch” exists without a branch ref pointing to it is at best purely pedantic. Without a ref there is no meaningful branch because it will disappear eventually.


For anyone reading this who would like to learn about the data model, I highly recommend following along the "gitcore-tutorial" manpage. Like actually type the commands and play around with the results. Once you understand what's going on under the hood, the UI commands all make intuitive sense.


The author and the people who insist on teaching (just) that "a branch is just a ref" are fighting over the wrong point. The important part is to understand that each commit is itself both a complete snapshot of the repository and a sequence of commits that led to that snapshot (or more correctly, that it doesn't make sense to think of a commit without thinking about its pointer to the parent commit). That seems weird at first, but everyone who understands how git works has internalized it, whether they explicitly think about it or not. After you understand that, it becomes easy to see that both the "technically correct" point and the author's point are kind of equivalent ways of saying the same thing.

But without this understanding, being told "branches are named sequences of commits" is probably worse than being told "a branch is nothing but a ref". The second one is cryptic and will soon be forgotten, no harm done. The first one leads you into a false sense of understanding, and soon you'll see an operation that looks like deep magic: someone moves the branch ref and now suddenly the whole branch is a completely different sequence of commits.

The confusion experienced by many people is largely due to the fact that a lot of articles try to teach git in a way that gives a false sense of understanding without explaining how git really works, which is exactly what teaching "Branches are named sequences of commits" does.


While I agree the title in a vacuum has the ability to mislead, the article itself is a critical piece, not a tutorial for beginners. I don’t think much harm was done here.


It's interesting how I strongly disagreed with you before reading the linked blog post, but I fully agree with you after reading it.

I'd still rather think of "git branches" as the technically correct "just refs" and hold the separate-but-related human-only concept of a "development branch" in my head. I don't think there's a better approximation of truth than that, nor do I think it's that much more complex to understand.

But the fact these two concepts share the same name truly is confusing. One would do better to refer to the former as "bookmark" or "branch tip".


I'd argue that in the long run, thinking of branches as the entirety of the history before a commit causes more confusion. I'd propose that the most useful way to convey the idea of branches to new git users is to start with the concept that every commit after the initial one has one predecessor, which means that you can always trace back the history of a commit by following the predecessors back to the initial commit, and then introduce the idea of a branch as a name that refers to a given commit. Combining these two ideas means that for any commit, you can definitively state whether or not it exists in the history of the commit that the name points to, and that commits that are part of that history are conceptually considered to be "in" the branch.

Then you can introduce the idea that you can "update" the commit that a branch points to, and that the only way to add a new commit is to "increment" a branch to point to a new commit after the current one it points to.

This establishes enough information for you to show how using a git repo actually works; at any given time, you're looking at one specific commit, either directly or via a branch's name. If you're using a branch, then committing will perform the "increment" discussed earlier, with the branch now pointing to that new commit. Showing how to create a new branch will naturally lead to the discussion about how you can have two branches pointing to the same commit; this lets you explain that adding a new commit without specifying a branch name can be ambiguous, which you can demonstrate by checking out the current commit directly rather than by a branch name.

Once you've shown that adding a commit requires either checking out one of the branches you have that point to that commit or creating a new one, you can show that the same principle holds for any other commit in the repo as well, even ones further back in the history with no branch currently pointing to them. You can use this opportunity to introduce the concept of `HEAD` as the unique name for whichever commit you're currently looking at, and that looking at a commit directly rather than via a branch is called having a "detached `HEAD`", which means that you won't be able to make any changes without creating a branch at that point first and "reattaching `HEAD`" to that new branch.
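
Concretely, that walkthrough could look something like this (branch names are made up, and it assumes a repo with at least a couple of existing commits):

  git switch -c feature                  # a new name pointing at the commit you're currently on
  git commit --allow-empty -m "step 1"   # "increments" feature to point at the new commit
  git branch alt                         # a second branch pointing at the same commit as feature
  git checkout HEAD~1                    # look at an older commit directly: detached HEAD
  git switch -c fix-from-here            # create (and attach to) a branch there before committing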

If you're trying to teach git to someone who hasn't yet learned the equivalent of an intro to data structures class in computer science, it might be worth simplifying the concept of branches in the way you describe. If you're teaching someone who already understands what a tree is, you're doing them a disservice by trying to hide the model from them because they have more than enough to understand what a branch actually is.


This was true back in Subversion too!

> Creating a branch is the same as creating a tag

> Tags merely exist to pinpoint a specific repository revision


It seems to me that people are confusing the CM concept of a branch with the way that git has chosen to implement it.


There is no single CM concept of a branch.


Interestingly though, there's a slow but steady push towards 24V on boats. More electronics are available in 24V versions or simply support both 12V and 24V. And new boats tend to have both a 12V and a 24V bank. Electric sailboats usually have a 48V bank as well.

And in an entirely different area, 3D printing completed its transition from 12V to 24V some years ago, and currently there's an active push towards 48V.

I'm personally just looking forward to not spending a fortune on cables due to low voltages. It's frustratingly expensive to carry 12V a decent distance with higher current draws.


Unfortunately this doesn't surprise me.

We worked on a Stadia title before launch. We were constantly reminded by Google how big the YouTube integration would be, which unique killer features we absolutely had to integrate with, and more.

And none of that ever materialized after launch. If Google can't even convince their own internal teams to cooperate, how do they expect studios and consumers to care in the slightest about their product?

It also didn't help that supporting Stadia was equivalent to supporting an entirely different new console in scope, except less battle tested and much more buggy. Meanwhile all their competitors allow existing console or Windows builds to be shipped to their platforms.

And while we're sharing anecdotes, this was a fun one. For the longest time devkits were limited to 1080p, but at least the output was streamed from rack mounted servers that supported a couple of concurrent sessions. A few months before launch, they finally made 4k devkits available, except they supported only a single session, couldn't stream, and instead had to sit at a developer's desk with a monitor hooked up...

Let that sink in, a streaming service's devkits couldn't stream :)


From the consumer perspective, this reminds me of the new chromecast that was released without Stadia support, even though the previous chromecast supported it. Get that! A streaming stick that couldn’t stream the company’s own paid service. Preposterous!


Do you think the YouTube integration and the other "killer features" you mentioned would have made Stadia more popular if they actually came to fruition?

Personally I believe if the YouTube integration was ready at launch the whole Stadia story would have been very, very different. I do believe that a frictionless way to jump into a game that you are currently watching a video/stream for or even join a streamer in multiplayer with a click would have been an amazing thing!


IMO streaming games still has a lot of potential. Too bad Google couldn't pull it off.

The Youtube stuff is only the surface of what would be possible.


> IMO streaming games still has a lot of potential.

Not with the current internet speed.

The vast majority of people are below anything that would play "okay", and almost everyone is below a speed that would play well (1 Gbps).

Until 1 Gbps is the default EVERYWHERE, streaming games has 0 potential.


Stadia ran (runs) well at 50mbps, and their competitors don't require much more (~100mbps for comparable results afaict), and 2x that minimum often results in a flawless experience if you have the bandwidth/latency to back it up (e.g. if you're on a home/work connection, rather than a busy coffee shop).

I put almost 1,000 hours into Stadia across all my games travelling across ~20 states and 3 countries the past ~3 years. It's very rare to find places where it isn't "okay" to play (with some notable exceptions near launch where you'd regularly get ~1 second input delay at times or frozen, pixellated graphics), and in many places now it feels indistinguishable from native/local games.

I don't know which platform I'll move to from Stadia, but it will definitely be a cloud one.


It does not run well at 50mbps: you have artifacts all over the screen; it's unbearable. And as people move more towards 1440p or 4K monitors, it's even more intolerable.


The streaming problem is not about bandwidth, but about latency. With current technology and physics there's nothing you can do about latency.


It is both. I tried GeForce NOW and it required only about 60Mbps - just a fraction of a Gbit connection. Still, I sometimes struggled to keep it at that minimum. Variable bandwidth doesn't matter for file downloads or video streaming (where there are a few seconds of buffer), but it makes game streaming almost unplayable.


Nah, if you can stream Netflix in 1080p or better and have low latency then game streaming works fine. I know people who do it off LTE without issue even for non competitive games.


I encourage you to watch a Netflix movie and a live video game side by side and you will see how nonsensical the comparison is.


Doing this now... what are you expecting to be obvious from this experiment? Obviously the video game has some upstream requirements (just user input), but neither is stuttering or having any issues.


The quality is simply incomparable. A streamed 4K movie wouldn't even compare to a 1080p game being run.

You're putting compressed and raw visuals side by side; it just doesn't compare at all.


The GP comment you were referring to recommended streaming Netflix in 1080p to, presumably, compare to streaming games in 1080p, too; not to compare 1080p versus 4K. If you can stream Netflix in 1080p, there's not much additional strain on the network to stream games in 1080p.

Side note: Stadia also supports streaming games in 4K, which will have a relatively equal quality to streaming a 4K movie, for the same reasons. That is the result I see while streaming a movie and a game side-by-side.


Netflix in 1080p is nowhere near the quality required to play games in 1080p. It is heavily compressed, whereas games need crisp precision because they contain a lot of text that should be rendered precisely, as well as needing pixel perfection for a lot of in-game elements. To convince yourself, look at Twitch, which has a higher bitrate than Netflix for a similar resolution, and realize that even that is far from good enough to be playable.


Also, video encoding must be done in strict realtime for gaming. Twitch also prefers realtime for communication, but not as strictly. Netflix pre-encodes its videos, so it has a significant quality advantage even at the same bitrate. So Stadia needs more bitrate than Netflix for the same quality.


You can get high quality streaming video with much less than 1 Gbps; low latency and consistent speed + latency are the important parts.

(Needed bandwidth will still be higher than regular video streaming though, as you have to compress in real time)


I stream locally from my gaming PC to an Nvidia Shield in 4K at around 80 Mbps and it looks as good as an HDMI hookup (and less than that would look fine).


Yeah, streaming locally you don't even need the internet, buddy; the speed of your connection is totally irrelevant if you're on the same local network.


> a streaming service's devkits couldn't stream

That is just mind boggling - how on earth were you meant to test anything properly?


> how big the YouTube integration would be, which unique killer features

Exactly, this was advertised so much even to regular users/consumers and it genuinely seemed like it could be really cool. I'm baffled that nothing really came out in the end.


From the license in the same repository:

> Furthermore, please be aware that opening the outside enclosure of your Steam hardware will void a warranty you may otherwise enjoy (https://support.steampowered.com/kb_article.php?ref=4577-TUJ...).

I'd say that significantly diminishes the goodwill that statements like "you have every right to open up your Steam Deck and do what you want with it" buy them, though.


So if you crack open your unit and break something while messing around, you expect Valve should fix it for you for free under the standard warranty?

Allowing you to make the choice between hacking or not seems like perfectly good will to me. You can't have your cake and eat it too, as they say.


> So if you crack open your unit and break something while messing around

I don't expect a warranty from that, but I very much do expect a warranty if I crack it open and don't break something while messing around.

Thankfully the actual warranty doesn't say it's voided by opening.

Some of the exclusions on there are worrying, though. Improper cleaning? Refurbs have no warranty? "Commercial use" is much too vague.


Valve can't invalidate your warranty if your opening up the deck wasn't responsible for the damage, at least in the US.[0]

So yes, opening it up and improperly cleaning it (for example, you take a q-tip with rubbing alcohol to clean off some gunk and accidentally dislodge a capacitor) could void your warranty.

----

[0] https://www.ftc.gov/news-events/press-releases/2018/04/ftc-s...


> So yes, opening it up and improperly cleaning it (for example, you take a q-tip with rubbing alcohol to clean off some gunk and accidentally dislodge a capacitor) could void your warranty.

My complaint is that the warranty page only talks about whether you caused the damage for the clause about opening it up or making modifications. The clause about warranty loss for cleaning "in any manner other than as specified in the Hardware manual" is unconditional.


A lot of people don't think they damaged something by opening it up...

But when they put a screw back too tight, which then a year later cracks the plastic support it's screwed into, falls out, and shorts some terminals on the voltage regulator, and the whole thing goes up in smoke...


Then it sounds like the person voided the warranty by assembling incorrectly, not by opening it.

I’m sure that if people opening their devices, breaking them, and expecting warranty to cover it is such a big problem, then there is enough financial incentive to find an actual solution.


I don't expect to keep the warranty; it just diminishes any praise you could give for other statements they make, like the one quoted in the OP

> you have every right to open up your Steam Deck and do what you want with it.

We already have this right for things we own, on exactly the same terms, which is that it voids the warranty.


Except that merely opening something _does not_ void the warranty in the US. It is in fact illegal for a manufacturer to claim that it does! It has been illegal since 1975 and people still believe this crap. Even repairing something does not void the warranty.

https://www.ifixit.com/News/11748/warranty-stickers-are-ille...


Just noticed that they use different wording on the readme:

"Any damage you do will not be covered by your warranty – but more importantly, you might break your Steam Deck, or even get hurt!"


Early attempts at embedding browsers in games were based on Gecko as well.

But it didn't become more widespread until a switch to Chromium was made, which made the entire process a lot simpler in both execution and distribution.

And there hasn't really been a need to revisit Gecko.


Wasn't Rust specifically invented to rewrite Gecko?


Rust was a personal project of Graydon Hoare. Mozilla jumped on the Rust train 3 years after its inception. https://web.archive.org/web/20160609195720/https://www.rust-...


> They managed to disturb millions of worker to an "attention driven" work culture in which everything needs to always be synchronous and immediate.

I see where you're coming from. But anecdote time.

The immediate communication fixed something for us which would previously be a more disruptive tap on the shoulder, or alternatively an e-mail processed only once a day. Slack gave our devs time to finish their thought and write out that line of code before tabbing to Slack to see what's up.

Because you see, I love my team, but they're not perfect. Just like the vast majority of people, they're imperfect beings working with imperfect information. And in order to get them to output quality code (as in, code that does what it needs to do, bug-free, without incorrect assumptions about data) they need to communicate with each other and with me. We can't wait until a PR to catch that they didn't fully understand these data models set up by another guy. Nor can we wait until a PR to realize someone took the wrong approach trying to fix a problem.

Someone getting disrupted might mean someone else can progress with their task. What I'm trying to get at is that I need my team to communicate, and communicate often. We have plenty of issues, just like any team, but most of them come from a lack of communication. Slack, or any other similar platform, allowed us to strike somewhat of a happy medium where the barrier to communicate isn't too high nor too low. It's less formal and faster than an email. And it keeps a better log than an in-person conversation would.

Added bonus, it also helps us to have a more liberal 'work from home' policy.


What previously was a tap on the shoulder is now a message and a tap on the shoulder after three minutes if there is no reply.

Slack has become the worst possible amalgamation of email and telephone. When previously minor things were discussed asynchronously over mail, only major and immediate issues warranted a phone call. Now all kinds of noncritical correspondence gets pushed to an instant messenger application, where every issue has to be paid attention to immediately.


Do people really do that? Did the same people not tap on the shoulder a short while after sending an email, too?


None of this is unique to Slack, it's just a consequence of using a chat app, or email.

The only thing that allows you to have a liberal work from home policy is a healthy company/team/corporate culture.


> If hiring a freelancer to do a little thing tends to generate all sorts of nightmarish legal liabilities, your company is either a government contractor or it has even bigger structural problems than hiring.

It might not be an issue for your company, but your candidate might have a non-compete (or other clause) preventing them from accepting compensation or even performing work for another company at all.

Things get trickier as soon as money is involved.


Any company trying to enforce such a non-compete against someone who just earned $200 would spend a lot more than that in legal fees. This feels like legal paranoia to me.


Furthermore, in German we have a saying: "Where there is no judge, there's no executioner". Maybe this attitude is more popular in Switzerland than in Germany, but the basic meaning is: don't sweat the legality of small exchanges that (a) don't affect anyone negatively and (b) aren't in some government agency's crosshair. That being said, your mileage in different legal systems may vary - in our civil law system it's probably more common for a judge to simply throw out such trifling matters and people are generally also way less inclined to start a lawsuit.


The USA is kind of special regarding how legality is handled. An unhappy candidate may find legal arguments to complain. In Europe, states may be unhappy that work has been done without tax payment.

This payment looks like easy money for cheaters who would copy ready-made solutions or ask friends for help. I'm not convinced, but this is indeed how things should be done in an ideal world.

I guess we could also ask candidates to pay. That is what is done in engineering schools in France. Not sure this is a good filtering method because of the Dunning-Kruger effect.


> In Europe, states may be unhappy that work has been done without tax payment.

Well, it's a good thing we have (now domestic only) banking secrecy in Switzerland, for exactly these reasons. You only get into trouble when your livelihood doesn't make sense from what you earn and own - as long as you pay what's generally considered a 'fair' amount of tax, no-one will blink an eye and come after your documentation. And even if for some reason the government finds out, all that will happen is a penalty tax ranging from 20 to 300% of the missed revenue, usually 100%. There is an explicit difference between tax fraud (document forgery, which could mean jail) and tax evasion (when you 'forgot' to declare something, which leads to the above penalty).

Overall, the system is set up so that the government can't abuse its asymmetric power over individuals, instead keeping it as close as possible to an 'employee' of the people.


To me it feels like classic "A-ha! I discovered an edge case so the entire solution is invalid!". Just handle the edge case differently to how you handle everything else.


HR don't care; they can afford the cost, whereas an individual employee is unlikely to be able to.


Then the candidate says something like "Well, you know, I can't take any payment before I'm out of this job. Can't I just do it for "free", and payment come as a bonus on hiring?"


What if you don't get an offer or if you decline their offer?


It's a risk you take, or don't; it'll certainly depend on how compelling the job is.

But you are the one that placed yourself in a bad situation. They have a sane procedure that fully respects you, but isn't fine-tuned to your current problems. You can try negotiating their procedure too, but in their place (which I'm not - I'm in the "can't bill you" place right now), I'd refuse to.


I wouldn't do that. But then, I don't work for free.


If you're in tech, Sweden is a more attractive opportunity. It's pretty similar to the Netherlands culture wise, but with a much bigger and faster moving tech scene.

That said, any in that line up will be a bit of a culture shock coming from the US.

Source: I'm Dutch and moved to Sweden


Agreed on the culture-shock. None of it will be too bad, probably, but there are enough little differences that it'll take a while to adjust, and not all people can/will manage it.

Source: Scottish, moved to Finland.

(Biggest pain-points for me: the Finnish language, and the brutal winters.)


If I'm a programmer/developer looking for a job in Sweden, where can I find offers? How do you feel about taxes in Sweden? Aren't they very high in the Nordic countries?


I'm in Norway. And yes, taxes are high, but then again, they really aren't.

I think when I did the math, the taxes weren't really all that much higher than the taxes + health insurance costs in the states. It feels more like I'm paying a tax instead of outright fees for some things. Some I don't notice as much - such as the 25% VAT, mostly because it is included in the listed price.

Cost of living is high here, but Sweden is supposedly cheaper. Enough that there is a bus that goes over the border for Norwegians to shop.


Here is a common job aggregator site for Sweden:

https://www.jobbsafari.se/jobb/it/systemutveckling

Most of the ads are in Swedish, but Google Translate does a decent job of translating.

Don't be afraid to apply in English; the majority of IT companies don't mind hiring English-only speakers.

Not sure about the rules on getting a work-visa in Sweden.


There seem to be a lot of conspiracy theories going on about what I personally believe to be a really straightforward and logical reaction.

From my perspective, "Bash on Windows" is a reaction to the wide adoption of OSX for development (web development in particular seemed to have a mass migration).

For obligatory anecdote:

I'm a longtime and loyal Windows user. Windows has been my primary development platform even though parts or all of our stack ran on some flavor of -nix. I've always encouraged my team to do the same, primarily motivated by the better UX on Windows.

But, like many in the past few years, I jumped on the OSX bandwagon and moved the entire department over. This wasn't a fun transition, and it came with many pains. But ultimately it was a necessary transition, as the tools we needed just weren't supported on Windows.

Development became more complex, tooling became mandatory at every stage of development, and only OSX offered us a reasonable balance between a -nix-like environment that ran the tools and a decent UX.

Microsoft's move to bring Bash to Windows will likely motivate me to migrate back in due time.

While some may be spinning conspiracy theories, I'm personally just really glad Microsoft is moving in this direction.


You talk like developing on Linux means doing everything in the command line. *nix UX is nothing to be ashamed of, and I personally prefer it over Windows and OS X (especially OS X).


From an IT and "just works" perspective, Linux was a no-go for us.

It wasn't particularly about the UX on Linux desktops, but rather they don't fit in our company culture when it comes to how assets and IT are managed.

Which is why I wrote it from my own perspective as an anecdote.

As usual, YMMV


Personally, the reason I like OS X over a random linux distribution is the sane keyboard shortcut defaults - CMD+C and CMD+V just work everywhere, as well as things like CMD+W.

Well, almost everywhere. And CTRL+F is still a little wonky depending on the app you're in.


For me it's the other way.

In Linux there's a clear separation: CTRL for sending messages to the app, ALT for commands, and SUPER for messaging the OS.

I found OSX very confusing trying to mix everything into a single key.


Oh, that's an insight I've never thought about. It never occurred to me that they had different purposes.


Where on Linux do Ctrl-c, v, and x not work? I haven't had a problem with them anywhere in a decade. I have largely stuck to Gnome2, MATE, and XFCE, though.


The problem with Ctrl C is that it is also the shortcut for SIGINT when the terminal is focused.

Also most terminal emulators will forward all Ctrl combinations directly over the TTY rather than capturing them in the windowing system, so in practice Ctrl-V rarely works in a terminal either. Likewise for Ctrl-W, which is typically bound to backwards-kill-word, etc.

The way it ends up in practice, shortcuts involving the Command key on OSX end up being clearly defined and consistent, because apps typically can't override them.


You need ctrl+shift+c and ctrl+shift+v, etc. in a terminal.


As a die hard Linux fan, I'll admit that it'd be nice if ctrl+left/right worked the same everywhere.

On most editors it moves me one word, on the command line it inserts the control characters.

Same story for ctrl+backspace and ctrl+a.


Ctrl-v doesn't work in lxterminal.

("Paste" is on the right-click menu though.)


Yeah, there's a development tool I was watching that worked entirely the same on Linux and OS X, but then it has to have a special sidebar for 'if on Windows, do this'. And this will likely be able to go away with Bash for Windows.


I agree with you 100%. The only thing I worry about with "BASH on Windows" is how separate the subsystems are. On OS X I can interact with the desktop using osascript. Is there anything like this in the new Windows stuff? I fear there will not be, based on what I've read so far.


More anecdata of that: we all have Macs for doing Java web server development at my job, which is for the UC (Davis), rather than some startup.


While not Docker Cloud specifically, when we eyeballed UCP we found it very underwhelming when pitted against Kubernetes.

To us it appeared to be yet another in a sea of orchestration tools that will give you a very quick and impressive "Hello World", but then fail to adapt to real world situations.

This is what Kubernetes really has going for it, every release adds more blocks and tools that are useful and composable targeting real world use (and allow many of us crazies to deal with the oddball and quirky behavior our fleet of applications may have), not just a single path of how applications would ideally work.

This has generally been a trend with Docker's tooling outside of Docker itself, unfortunately. Similarly, docker-compose is great for our development boxes, but nowhere near useful for production. And it doesn't help that Docker's enterprise offerings still steer you towards docker-compose and the like.

