
Linux in part won because the Regents were getting sued at a critical time. Without Linux we would run BSD and it would be fine.

Git is good in some ways, terrible in others. I've used it for years and still don't feel really comfortable with it, but I've never had something as multi-headed as the Linux repo.




Not really. Linux won because the GPL has proven to be more suitable to businesses, and especially commercial open source business models as practiced by companies such as Cygnus and Red Hat.

BSD-style licenses tend to be a better fit for business models that sell proprietary extensions to free software. These form lock-in moats that inhibit the growth of any deeper ecosystems. We've seen this over and over again with things such as graphical subsystems for non-free UNIX, but also more recent examples with firewalls and storage boxes. Those are great for what they do, but work on the free parts is seen more as a gift to the community than a money maker.

The tit-for-tat model of the GPL enables those ecosystems to form. Because it forces your competitors to free their code in exchange for yours, game theory dictates that moats cannot form, and when everyone stands on others' shoulders, development is faster.

I'd say that's pretty much experimentally proven by now. Of course, reality is not as black and white, especially when GPL-style companies contribute to BSD-licensed software and vice versa. Perhaps PostgreSQL is a prominent example of that. There are, however, traces of these patterns there too, for example in how the many proprietary clustering solutions for the longest time kept the community from settling on a standard approach.


> Linux won because the GPL has proven to be more suitable to businesses, and especially commercial open source business models as practiced by companies such as Cygnus and Red Hat.

That's an interesting take, but I'm not sure I understand.

Are you saying Linux would've lost (presumably to proprietary OSs?) if it used a permissive license?

If the GPL has been proven to be more suitable for business, why is the use of GNU GPL licenses declining in favor of permissive licenses?

I don't have a hog in this pen and so I'm not trying to provoke. I'd just like to hear thoughts on why it looks so different from where I sit.


Not the parent, but here are my thoughts:

The size and number of companies working on things related to a project determine whether a strong copyleft license or a permissive license makes the project more successful.

Say you want to make a business around a FOSS project. Which license should you choose for that project?

If your business starts gaining traction, people may realize it's a good business opportunity, and create companies that compete against you.

I'll simplify to two licenses, GPL and MIT. Then there are two options, based on which one you chose originally:

1) If you chose the GPL, then you can be sure that no competitor will get to use your code without allowing you to use theirs too. You can think of this as protection, ensuring no other company can make a product that's better than yours without starting from scratch. Because everyone is forced to publish their changes, your product will get better the more competition you have. However, your competitors will always be just a little behind you because you can't legally deny them access to the code.

2) OTOH if you chose MIT, a competitor can just take your project, make a proprietary improved version of it and drive you out of the market. The upside is if you get to be big enough, you can do exactly that to _your_ competitors.

You can see that when you are a small company the benefits of the GPL outweigh the cons, but for big ones it's more convenient to use MIT or other permissive licenses. In fact, I think the answer to your question "why is the use of GNU GPL licenses declining?" is that tech companies tend to be bigger than before.

Now say you want to make a business around some already existing software. And say there are two alternative versions of that software, one under the GPL and one under the MIT license (for example, Linux and BSD). Which one should you base your business on? And contribute to? Well, it's the same logic as before.


In the context of Linux, think of the GPL as a joint development agreement, with teeth.

I used to follow LLVM development. There was lots of mailing list traffic of the form "I'll send you guys a patch as soon as management approves it..." followed by crickets.

Basically, RMS was exactly correct about the impact that loadable modules would have on GCC's development.


Basically, RMS was exactly correct...

The last thirty years in a nutshell.


> If the GPL has been proven to be more suitable for business, why is the use of GNU GPL licenses declining in favor of permissive licenses?

It isn't, at least not exactly. It's declining in favor of a combined permissive/commercial license model. And it's only doing that for products that are meant to be software components.

The typical model there is that you use a permissive license for your core product as a way of getting a foot in the door. Apache 2.0 is permissive enough that most businesses aren't going to be afraid that integrating your component poses any real strategic risk. GPL, on the other hand, is more worrisome - even if you're currently a SaaS product, a critical dependency on GPLv2 components could become problematic if you ever want to ship an on-prem product, and might also become a sticking point if you're trying to sell the company.

But it's really just a foot in the door. The free bits are typically enough to keep people happy just long enough to take a proper dependency on your product, but not sufficient to cover someone's long-term needs. Maybe it's not up to snuff on compliance. Or the physical management of the system is kind of a hassle. Something like that. That stuff, you supply as commercial components.


I don't agree; I think the license doesn't matter at all for users. And there really aren't that many companies distributing Linux, such that they would need to comply with the GPL. Google can make all the patches they want for their servers. The only reason for them to contribute them back is to offload the maintenance cost.

As with most things, I think Linux succeeded because it was a worse-is-better clone of existing systems that happened to get lucky.


It's a shame how poor the command-line usability of the git client is. Commands are poorly named and often do the wrong thing. It's really hard to explain to new users why you need to run `git checkout HEAD *` instead of `git reset` as you'd expect, why `git branch [branchname]` just switches to a branch whereas `git checkout -b [branchname]` actually creates it, etc.

I really wish he'd collaborated with some more people in the early stages of writing git to come up with an interface that makes sense, because everyone is constantly paying the cost of those decisions, especially new git learners.


I don't even know what `git checkout HEAD * ` does lol. Does it just checkout the current branch to the HEAD of that branch?

I can never seem to guess what things do in Git, and I consider myself fairly comfortable with the core concepts of Git. Having written many types of append-only / immutable / content address data systems (they interest me), you'd think Git would be natural to me. While the core concepts are natural, Git's presentation of them is not.. at least, to me.

edit: formatting with the * .


git checkout <reference> [<files>]

So, that says: copy all of the files out of the current branch at the current commit into the local dir. What this will do in practice is "discard current changes to tracked files". So if I had files foo, bar, baz, and I had made edits to two of them, and I just want to undo those changes, that's what checking out * from HEAD does. It doesn't, however, delete new files you have created. So it doesn't make the state exactly the same.

So why not just git checkout HEAD? Well, you already have HEAD checked out (you are on that branch), so there's nothing for git to do. You want to specify that you also want to explicitly copy the tracked file objects out. It's kind of like saying "give me a fresh copy of the tracked files".

The confusing thing is that in practice it is "reverting" the changes that were made to the tracked files. But `git revert` is the command you use to apply the inverse of a commit (undo a commit). One of the more confusing aspects of git is that many of the commands operate on "commits" as objects (the change set itself), and some other commands operate on files. But it's not obvious which is which.
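
A rough illustration with the same files (a sketch; output elided):

  $ echo tweak >> foo        # edit a tracked file
  $ git checkout HEAD *      # copy foo, bar, baz back out of the last commit
  $ git status               # foo's edit is gone; new untracked files are left alone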


That command would only discard changes to non-hidden files though, because * typically doesn't expand to hidden files. I think the command one really wants in these cases is

  git reset --hard


That throws away everything though, whereas `git checkout HEAD *` only throws away stuff in the current directory and below, or you can pass exact filepaths to be surgical about which changes exactly you're reverting. This is what I use it for most often -- reverting some, but not all, edits.
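
E.g. (the paths here are just placeholders):

  git checkout HEAD -- src/parser.c README   # drop local edits to just these two files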


Gonna risk getting my head put on a stake, but why not just use a GUI git client at that point like TortoiseGit/GitKraken/SourceTree?


It's been a long time since I used a GUI source control client. Maybe I should try one out again. Certainly it makes diffs nicer.

It's just that I've been using git CLI for so long, and know exactly which commands to use in any circumstance without having to look them up, that I don't benefit much from switching to something new, whereas someone who hasn't yet put in that time to really learn git would stand to benefit more.


A small correction...

`git branch [branchname]` creates a branch without switching to it.

`git checkout -b [branchname]` creates a branch and checks it out.

And `git reset --hard` will also discard changes. (Arguably, this is better than `git reset` discarding local changes, as it is more explicit.)
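
Roughly speaking, the one-liner is just shorthand for the two separate steps:

  git branch feature-x       # create feature-x, stay on the current branch
  git checkout feature-x     # switch to it
  git checkout -b feature-y  # create feature-y and switch to it in one step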


Your comment only makes the GP's point stronger.


I don't think so; I'm not much of a developer but I've used git productively for years and I've never had to run the commands CydeWeys listed.

The commands johnmaguire2013 listed are the ones usually recommended for beginners and I have found them easy to understand.

"git branch [name]" is for creating branches; it tells you if the branch already exists. Pretty easy to understand.

"git checkout [name]" is for checking out branches; it tells you if you're already on that branch.

You can run these sequentially and it works fine; there's no need for `git checkout -b [branchname]`.

I think there is sometimes some productivity porn involved in discussions of git, where people feel really strongly that everything should be doable in one line, and also be super intuitive. It's a bit like the difference between `mkdir foo && cd "$_"` on the command line, vs just doing mkdir and cd sequentially. IMO the latter is easier to understand, but some experienced folks seem to get upset that it requires typing the directory name twice.


Mm, I'm not sure I agree. The first correction shows that the command works as GP would have expected. The second command shows why checkout works too -- because it is checking out the branch (like the GP expected) in addition to creating a branch.

And I have already explained why `git reset --hard` makes more sense in my opinion.

I agree that Git can be hard to wrap your head around, and that the commands could be more intuitive. But Git is complex in large part because the underlying data structure can be tricky to reason about -- not because the UI on top of it is terrible.


There may be complexity under the hood (I don't know), but in my experience, even pretty advanced users employ a pretty straightforward mental model that's considerably simpler than the commands themselves are to use.


I disagree. I think the commands are intuitive and work perfectly for the system. It's very expressive, but it all makes sense once you've learned it.

git is a tool. Different tools take different amounts of time to master. People should probably spend some time formally learning git just as one would formally learn a programming language.


Other version control systems don't have this problem as much.


And where are they now? If you're not using git as source control at this point, I wouldn't even consider downloading your project.



Git succeeded in spite of its CLI, not because of it.


It's a good point: just because an entire product won out doesn't mean that every single one of its features was individually superior to its competitors. This is definitely not true for git.


That's roughly the same as saying that you won't eat the food prepared by a chef if they don't use your favorite brand of knife. You've diminished the value of your previous comment substantially with this one.


The git storage structure is not that difficult. People could implement their own compatible clients on top of it with a quite different UI. That nothing with a different UI has overtaken git in popularity seems to be an indication that the interface is not as bad as many people claim.


People have added better UIs on top of git. The problem is they don't come installed by default out of the box, and unless everyone else you work with is using them too it becomes quite hard for you to communicate properly over git issues (especially in writing development workflow docs). hg has a better UI out of the box, and is notably easier for new users to pick up and become productive with.

You're underestimating how much inertia is created simply by being the out-of-the-box default, and how hard that inertia is to overcome even by better alternatives.


Not sure who "many people" are, but no one in this thread is claiming that git is unworkable, only that it is confusing. A collaboration tool like git is highly subject to network effects. The usability delta between git and a given alternative must be very, very high before people will leave git for the alternative. Ergo, git can be both awful and "good enough" to have a majority market share (although I don't think anyone is even saying git is awful in this thread).


GoT (gameoftrees.org) did just that.

Built another tool, similar but not the same, using the same storage method underneath.


I use magit and it is a lifesaver.


I can still never remember the git equivalents for things like ‘hg log -r default:feature-branch’ or ‘hg diff -c <commit>’. People who haven’t used Mercurial really have no idea how pleasant a VCS interface can be.


The latter is 'git show <revision>', show being one of those fun commands that does a bazillion different things depending on what the argument is. My fun example of what's-the-git-command is 'hg cat -r <commit> <path>', whose git equivalent is another variant of 'git show'.
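
If I'm remembering right, that variant is the <commit>:<path> spelling, with the path given relative to the repo root:

  git show HEAD~3:Makefile   # roughly 'hg cat -r <commit> Makefile'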


Complain about Git, but there are commercial alternatives (I'm thinking of Perforce) that make even less sense.


>but there are commercial alternatives (I'm thinking of Perforce) that make even less sense.

Have you ever worked with Rational ClearCase? It's a true horror show.


ClearCase is wonderful if you fit the use case, which is a sizable team on a fast LAN.

It's great being able to change a ClearCase config file to choose a different branch of code for just a few files or directories, then instantly get that branch active for just those specific files.


I have, on several occasions in the past (up to 2007), and while it is a monster, I still felt more productive than I do using git nowadays.


Remember, back then companies had to hire dedicated ClearCase engineers to get things working properly.


You mean just like we have to reach out to IT to sort out git issues for anyone that strays outside the path?


Not really a common occurrence here.

Does it happen often at your workplace? What kind of issues are we talking about?


Basically the usual ones that end up with copying the modified files to a temporary directory and doing a fresh clone followed by a manual merge and commit, because someone messed up their local repository while trying out some git command beyond clone/pull/push/checkout/commit, and now cannot push without messing things up for everyone else.


Interesting; that never, ever, ever happens where I work, and most of our engineers are fresh-outs.


How often does this happen? Doing a fresh clone should be a last resort.


A couple of times per month, not everyone is a git black belt.


And there are alternatives that make better sense, too.

But the existence of something even worse doesn't excuse something that is merely bad. And git is so much more widely used that its total overall harm on developer productivity is worse.


You really only need to know a half dozen commands for basic productivity.

I started in the late 90's when cvs was popular. Then we moved to svn. You had productivity issues of all sorts, mainly with branching and merging.


Thing is, most of these can simply be fixed using an alias. It's not that hard to remember that '-b' creates a new branch, but 'gcb' followed by the branch name I want is probably easier. Maybe someone else wants 'gcob' or just 'cob'; that's what aliases are for.
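
For example, one way to set that up (the alias names are purely a matter of taste):

  git config --global alias.cb 'checkout -b'
  git cb my-new-branch       # same as: git checkout -b my-new-branch

or just a shell alias like alias gcb='git checkout -b' in your shell rc.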


I don’t think those examples were meant to be an exhaustive list of git’s UI warts. There are many that are harder to remember, and creating aliases and functions for each of them requires building a full alternative UI. For example, how do you see all of the changes in a branch (IIRC the equivalent of ‘hg diff -b branch-name’)? How do you see the changes for just one commit (i.e., ‘hg diff -c $commit’)? These things are all feasible in git, but I can never remember the incantation, so I have to Google it every time. I haven’t used hg in 5 years and I still have an easier time remembering those commands.


> How do you see the changes for just one commit (I.e., ‘hg diff -c $commit’).

git show <commitish>

will show the log message and the diff from the parent commit tree.


The changes for just one commit are `git diff $commit`, while the changes for a branch are `git diff $branch`.

While there are a metric ton of things which are confusing about git, this was perhaps not the greatest example.


`git diff <commit>' is the set of changes since that commit, not the changes of that commit (the distinction between hg diff -r and hg diff -c). Similarly, `git diff <branch>' is the diff between that branch and the current HEAD, not the diff of the branch itself.

So perhaps it's a great example if you've gotten it wrong?


/u/samatman is definitely mistaken, and I think you are correct (although I'm not sure about "the set of changes since that commit"). As far as I can tell, `git diff <commit>` and `git diff <branch>` both just diff your current workspace against the commit/branch respectively.

In the case of the branch, the correct result is something like "what has changed since my branch diverged from its parent"--basically what you see in a PR. I think this is unnecessarily obscure in Git because a "branch" isn't really a branch in the git data model; rather it's something like "the tip of the branch".

I don't think I've ever wanted to compare my workspace against a branch, but clearly diffing the branch is useful (as evidenced by PRs). Similarly, I'm much less inclined to diff my workspace against a particular commit, but I often want to see the contents of an individual commit (another common operation in the Github UI).

In essence, if Github is any indicator, Git's data model is a subpar fit for the VCS use case.


That's fair, insofar as I'm unfamiliar with the mercurial commands.

It's unfair insofar as it does what I expect it to, which is to diff between what I'm curious about, and where I am.

In other words, if you elide the second argument, it defaults to wherever HEAD is.

The point being, this is not something I personally need to look up. I'd venture a guess that your familiarity with hg is interfering because the conventions are different.


That brings up a deeper issue with git's philosophy. Git's UI is largely geared to introspecting the repository history only insofar as it exists to the currently existing checkout--commands that interact with history without concerning themselves with the current checkout are far more inscrutable, confusing, and difficult to find.

By contrast, Mercurial's UI makes the repository history a more first-class citizen, and it is very easy to answer basic questions about the history of the repository itself. If you're doing any sort of source code archaeology, that functionality is far more valuable than comparing it to the current state: I don't want to know what changed since this 5-year-old patch, I want to know what this 5-year-old patch itself changed to fix an issue.


> I'd venture a guess that your familiarity with hg is interfering because the conventions are different.

Git users also need to answer questions like "What changes are in my feature branch?" (e.g., a PR) and "What changed in this commit?" (e.g., GitHub's single-commit-diff view). These aren't Mercurial-specific questions, they're applicable to all VCSes including Git, as evidenced by the (widely-used) features in GitHub.

Even with Git, I've never wanted to know how my workspace compares to another branch, nor how a given commit compares to my workspace (except when that commit is a small offset off my workspace).

> In other words, if you elide the second argument, it defaults to wherever HEAD is.

Yeah, I get that, but that's not helpful because I still need to calculate the second argument. For example, `git diff master..feature-branch` is incorrect, I want something like `git diff $(git merge-base master feature-branch)..feature-branch` (because the diff is between feature-branch and feature-branch's common ancestor with master, not with HEAD of master).
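
Spelled out (a sketch, using the same placeholder branch names):

  # what the feature branch adds, PR-style: diff against the merge base, not master's tip
  git diff $(git merge-base master feature-branch)..feature-branch
  # if I'm not misremembering, the three-dot form is shorthand for the same merge-base diff
  git diff master...feature-branch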

One of the cool things about Mercurial is it has standard selectors for things. `hg log -b feature-branch` will return just the log entries of the range of commits in the feature-branch (not their ancestors in master, unlike `git log feature-branch`). Similarly, `-c <commit>` always returns a single-commit range (something like <commit>^1..<commit> in git). It's this consistency and sanity in the UI that makes Mercurial so nice to work with, and which allows me to recall with better accuracy the hg commands that I used >5 years ago than the git commands that I've used in the last month.


It'd be better to not have to do these things at all, i.e. if the commands just made sense out of the box.

These are problems that every single person learning git has to figure out and then come up with their own solutions for.


I use git with no aliases and have forever. I came from SVN, and myself and the entire team I worked with at the time enjoyed the transition and had very few issues.

So much of this drama seems propped up on things that just aren't that difficult.


I'd argue that the naming and usability of git are actually very much to the point. The naming reflects not what you want to do at a high level, but what you want to do with the underlying data structure and the checked-out files. This could be seen as a weakness or an unnecessary problem for newbies, but if you work with branches or in a multiuser environment, you will inevitably run into some complex and problematic conflict, and then "magical tools" will leave you with a mess, while I've yet to see a problem with a git repository that I couldn't solve.

I've actually taught many teams how to use git, and I always start with Merkle trees. They are actually easy to grasp even for designers and other nontechnical people, and can be explained in 10 minutes with a whiteboard. And then suddenly git starts to totally make sense and, I'd dare say, becomes intuitive.


> It's a shame how poor the command-line usability of the git client is.

In comparison to an average CLI program's usability, I think git's got a very good one. It's not perfect, but I think saying it's "poor" is really exaggerating the problems.

In particular, I love how well it does subcommands. You can even add a script `git-foobar` in your $PATH and use it as `git foobar`. It even works with git options automatically like `git -C $repo_dir foobar`.
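
A minimal sketch of such a subcommand (the name here is made up):

  $ cat ~/bin/git-where
  #!/bin/sh
  git rev-parse --show-toplevel
  $ chmod +x ~/bin/git-where
  $ git where                # prints the repository's top-level directory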

> It's really hard to explain to new users why you need to run `git checkout HEAD * ` instead of `git reset` as you'd expect

Why would you ever do `git checkout HEAD * ` instead of `git reset --hard`? The only difference is that your checkout command will still leave the changes you've done to hidden files, and I can't think that's ever any good.

> why `git branch [branchname]` just switches to a branch whereas `git checkout -b [branchname]` actually creates it

If you think those behaviors should be switched, good, because they are.

EDIT: How did you manage to add the asterisk to the checkout command in your post so that it's not interpreted as italics without adding a space after it?


While I think a world where BSD had become dominant would have thrived, things would have been different. Because GNU existed before Linux and never fully adopted Linux as its kernel, Linux has always existed separate from a specific userland. In my mind, this allowed more variety to be created on top of it (for better or worse). Moreover, Linux's license has encouraged a culture of sharing around kernel components that the BSD license did not mandate.

In an alternative timeline where BSD was dominant, would we have e.g. free software AMD drivers? Would we have such big variation in containers, VMs, and scalable system administration as we do on Linux? I wonder. No doubt that world would also be prettier than what we have now - in line with ways in which the BSDs are already better than Linux - but who knows.


I used to believe that absent the lawsuits BSD would have been THE choice instead of Linux, but I think there's a lot of truth to the position that Linux was far more experimental and evolving rapidly -- and exciting! -- than FreeBSD (et al.), which were busy doing things like powering the biggest web companies of the 90s (Yahoo and many more). Making waves and iterating rapidly was never going to mesh with the Unix Way (even open source Unix). As such, Linux got the mind-share of hackers, idealists, students, and startups, and the rest is history.

(I think it's a pity that the useful innovations that happened in Linux cannot be moved back over to FreeBSD because of licensing -- the computing world would be better off if it could.)


Serious question, what innovations in Linux would FreeBSD even want? I honestly can't think of any.

IMO it's Linux that should want the features from FreeBSD/Solaris. I want ZFS, dtrace, SMF, and jails/zones. Linux is basically at feature parity, but the equivalents have a ton of fragmentation, weird pitfalls, and are overall half baked in comparison.

For example, eBPF is a pretty cool technology. It can do amazing things, but it requires 3rd party tooling and a lot of expertise to be useful. It's not something you can just use on any random box like dtrace to debug a production issue.


> Serious question, what innovations in Linux would FreeBSD even want? I honestly can't think of any.

systemd


But those lawsuits were well settled long before Linux saw a significant inflection point, mostly with the rise of cloud computing. For example, AWS launched EC2 in 2006 (and Android 2 years after that), 12 years after the BSD lawsuit was settled. Linux still doesn't have a desktop footprint outside of the workstation market. By contrast, Apple (well, NeXT) incorporated portions of FreeBSD and NetBSD into their operating system.

This might be a controversial opinion but: Linux likely "won" because it was better in the right areas.


Linux was already used heavily long before "cloud" computing became a coined term. Not just for cheap hosting providers either; in the early 00s Linux dominated the top 500 supercomputers. I also remember repairing an enterprise satellite box in 2002 which ran Linux.

You're right that those lawsuits were settled long before Linux gained momentum though. FreeBSD and NetBSD were released after Linux, and their predecessor (386BSD) is very approximately as old as Linux (work started on it long before Linux, but its first release came after Linux). As far as I can recall, 386BSD wasn't targeted by lawsuits.

Also wasn't BSD used heavily by local ISPs in the 90s?

In any case, I think Linux's success was more down to it being a "hacker" OS. People would tinker with it for fun in ways people didn't with BSD. Then those people eventually got decision-making jobs and stuck with Linux because that's where their experience was. So if anything, Linux "won" not because it was "better" than BSD on a technical level but likely because it was "worse", which led to it becoming more of a fun project to play with.


> but I've never had something as multiheaded as the linux repo.

Git is overkill for so many projects, I hate being forced into for everything.


Git is the simplest, lowest-friction, lowest-cost, lowest-everything layer above plain file storage. How could there be something simpler on top of an existing file system? (I know there are some versioning file systems but I've never used them.) I use git for practically anything I do. I just git init and I have my project versioned, and I can, but don't have to, add messages to each of my versions. You don't have to use anything else if you don't want to, but you have so many options if you need them. You don't even have to use git online if you don't want to, but if you do there are multiple (even open source) hosting options with free private repos. What is there not to like?
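
For a tiny project the whole workflow can be as little as (a minimal sketch):

  git init
  git add .
  git commit -m "first version"   # later, 'git commit -am "..."' snapshots changes to tracked files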


Mercurial is wonderfully simple, particularly for smaller teams. Also, not being able to throw away branches ensures the project maintains a history of some wrong paths that were pursued.


> How can there be something simpler atop an existing file system

Mercurial? Similar DVCS concepts, but you no longer have to worry about garbage collection or staging areas...


What garbage collection? Isn't staging area actually a feature? I've never used anything else; when I started needing something like 5 years ago, git was already a recommended choice, but I also never felt like I needed anything else.


If your commits are not referenced by a branch or tag, then they are eventually garbage-collected. Having to have a branch to keep the commit around means you need to come up with a name for it if you ever want more than one. When I go back to Mercurial, it's actually quite relieving to not have to come up with a short name to describe what the current work branch is doing, only commit messages.

And no staging area is strictly simpler than having a staging area, which is contrary to your assertion.


When would you be creating a branch to do work without knowing what work you're doing?


It's not that I don't know what work I'm doing, it's that I don't know how to give it a unique name.

Sometimes, I try a few different approaches to make something work. Each of these attempts is a different branch--I might need to revisit it, or pull stuff out of it. Good luck staring at a branch name and working out if it's landloop or landloop2 that had the most working version of the code.


Super late, but I think a good approach would be to use an issue/work tracker of really any flavor (even manually), log all of your to-do headings as issues, and then just name each branch after the associated issue/job number.


> Isn't staging area actually a feature?

I've been using git for a few years, and staging has been all cost with zero benefit so far.


"Isn't staging a feature?"

Well yes, but the GP is claiming that git is the most simple thing above file storage.

Staging may be a feature, but it adds complexity. Perhaps useful complexity, but complexity nonetheless.


mercurial with a list of different plugins for each project? no thanks


Why would you need a different list of plugins for each project?


> I know there are some versioning file systems but I've never used them

But those other systems were the whole point of the post you replied to ;)


`.git` is a directory, while in Fossil, the repo is a single SQLite file.

Not making any larger comparison here, what I'm saying is that a single file is simpler than a single directory.


care to elaborate? I fail to see how git would be considered 'overkill' for a project.


Other version control software has way simpler syntax and workflow. Subversion for example. The complexity of git makes total sense if you indeed have a complex, multi-HEADed project like the Linux kernel. But most software isn't Linux.


You literally need to know two commands to work with git in a simple project: add and commit. I don’t see how that is at all complicated.


By that argument, you literally need to know a single command to work with Mercurial (hg commit) or SVN (svn commit), or hell, even CVS (cvs commit).


> "Mercurial (hg commit) or SVN (svn commit), or hell, even CVS (cvs commit)."

Why would I [re]learn those tools if I already know git?

If I'm going to move to a new VCS, it's going to be one that actually gives me something I didn't have before, like Fossil. Not some other VCS that captures the same concepts with a slightly different cli UX (which hardly even impacts me at all, since I rarely interact with such systems on the command line rather than through porcelain provided by an editor extension.)


Sure, I'm not saying they're more difficult, but people here are saying that git adds too much complexity in simple projects. It doesn't, but it lets you expand into the complexity if you ever need it in the future.


git commit -a then.


You've mistaken your finding git difficult for everyone finding it difficult, leading to an argument based on git being difficult that will never be compelling to those who didn't find learning git difficult. I'm one of them - I don't have any CS training - and so are the interns and new starters who use it without complaint in my workplace.

If you are forced into using it for everything but still haven't taken the steps necessary for understanding it, why is that my problem?


Git is as simple or as complex as you need it to be. And the complexity doesn't come at a cost to anyone who doesn't require it but uses git anyway.


Subversion needs a server, for one. For a single user and a single file, Git is already less overhead.


It doesn't, actually, you can host a repo on the filesystem without any sort of server process.

http://svnbook.red-bean.com/en/1.7/svn.ref.svnadmin.c.create...
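
From memory, something along these lines (paths are placeholders):

  svnadmin create /srv/svn/myproject
  svn checkout file:///srv/svn/myproject myproject-wc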


I agree - I even use it for tiny personal projects that I don’t even push anywhere because you can instantly get version control with a single ‘git init .’ in a directory. It’s plenty scalable and has very little overhead...


It has such minimal overhead I don't know how you could say that.


I agree. I have reasonable familiarity with git but I find that traditional SVN-type systems often (not always) have a lower cognitive overhead.

If I ever need to manage the codebase for a huge, multi-level project involving large numbers of geographically dispersed developers then I'm sure I'd use git. For simpler projects, not so likely.


Sure. But you know what is more complex than git? Git + <anything else>.

If you use git at all, you may as well use it for everything. If you have control over which version control system to use, there's no good reason to actively use multiple ones at the same time.


I use git at work where branching matters, I also use git for home projects where git add/commit/push/pull are the only commands I use. Git is efficient at both scales, it is opposite of overkill.


Furthermore, if you're forced to use it, it's because you need it to interact with others' versions of the repo, in which case branching matters.


How is git overkill? Perhaps you're conflating Git and Github? Or perhaps you're confusing git best practices or methodologies with git functionality?

Git costs nothing to use, you add it to a project and then it sits there until you do something with it. If you want to use it as a "super save" function, it'll do that. If you want to use it to track every change to every line of code you've written, it'll do that too.


Definitely. I find SVN so much easier. But we must all use Git because cargo cultism is cool, or something.


We use Git because GitHub happened to be the first non-shitty code repository website.


I'm talking about internal repos.


I know people like to complain about git's interface, but is it so lacking to the point that it justifies the time spent on learning multiple version control systems?


Yes.


Go easy on yourself and stop forcing yourself to use the CLI tools if you dislike them so much. For every editor and IDE under the sun, there exists extensions for these version control systems that provide you with a nicer interface than the CLI interface of any VCS.

For years probably 99% of my interactions with git, or any other VCS, is through editor extensions like magit or fugitive.


How do you work on more than one thing at once with svn without manually managing patch files?


Branches.


Branches in SVN are such a pain! If I recall correctly, creating a branch in SVN consists of making a full copy of everything (remotely, usually). In Git, creating a new branch consists of creating a new pointer to an existing commit.


Yes, branches look like full copies, but they are sparse copies, so only the changed data on the branch actually gets stored in the repository.


That's basically all it is in SVN as well...

And of course it's remote, every action in SVN is remote since it's centralized (except for shelving).


IMHO git is not a version control system but a version control tool. When we started using git through a system mediating all the functionality through a goal/workflow-oriented approach, our whole experience radically changed.

Both Fork and the IntelliJ IDE are great for that, handling the common cases solidly and building up so many convenience functions I can't live without them now, like whitespace-aware conflict resolution or single-line commits.


Right, git sort of makes the easy things easier, and the hard things harder. I think it's a large part of why "stable" software branching is unpopular. Tracking fixes against their core features over time is extremely difficult without an additional "shim" on top. Even knowing what group of commits comprise related functionality becomes difficult without layering a commit group id on top (as Gerrit does, for example) -- i.e., I'm looking at $file, which has a set of commits for feature $Y; what are the related commits in other parts of the system required to implement this feature? Or, for that matter, the ability to group a fix with its original commit (without rewriting public history) is why projects like the Linux kernel use "Fixes:" tags, which are then scanned by scripts/etc. to detect fixes for a given commit group for back-porting. Except in that case it's brittle, as frequently everyone involved forgets the tag.
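
For reference, the kernel convention is a trailer in the commit message that looks roughly like this (hash and subject invented here):

  Fixes: 1234567890ab ("subsys: short description of the original commit")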

Bottom line: git is the VCS equivalent of C. It is quite powerful, but it's got a lot of foot-gun moments and requires a lot of human rigor/experience to make it work well. Rigor that even the best users frequently mess up.


The thing about fiction is that it is... fiction!

So yes, if you make the hypothesis that things had gone so that we would all be using BSD, then we would. And yes, successful projects and people always involve an element of luck. But so what? What happened in reality is what happened, and if they got lucky, good for them, but that does not really take anything away from their achievements.


If your achievements came about by luck then how do you get off claiming credit for their success? I don’t think git was purely luck—it is a formidable tool in its own right, but there are better tools out there, and that was especially true at the time when git really took off, which is to say when GitHub began to be popular.


> If your achievements came about by luck then how do you get off claiming credit for their success?

The same way it doesn't stop gamblers, stock pickers, actors, and entrepreneurs from mistaking survivorship bias for talent.

That said, I don't think git was purely luck either.


What if Google had built Android on top of BSD?


Almost there. The Linux kernel is the only GPL piece still standing.


Wouldn't change a thing. Apple built on FreeBSD; the Android userland and libc are all BSD.


Slight clarification, Darwin is a mish-mash of CMU's Mach microkernel, some 4.3 BSD, some BSD userland, some GNU userland (although that seems to be going away), and then NeXT/Apple stuff on top of that.


I'd say a big part was attitude toward common hardware and existing common setups.

There were a couple of times early on when I wanted to try both Linux and one of the BSDs on my PC. I had CDs of both.

With Linux, I just booted from a Linux boot floppy with my Linux install CD in the CD-ROM drive, and ran the installation.

With BSD...it could not find the drive because I had an IDE CD-ROM and it only supported SCSI. I asked on some BSD forums or mailing lists or newsgroups where BSD developers hang out about IDE support, and was told that IDE is junk and no one would put an IDE CD-ROM in their server, so there was no interest in supporting it on BSD.

I was quite willing to concede that SCSI was superior to IDE. Heck, I worked at a SCSI consulting company that did a lot of work for NCR Microelectronics. I wrote NCR's reference SCSI drivers for their chips for DOS, Windows, NetWare, and OS/2. I wrote the majority of the code in the SCSI BIOS that NCR licensed to various PC makers. I was quite thoroughly sold on the benefits of SCSI, and my hard disks were all SCSI.

But not for a sporadically used CD-ROM. At the time, SCSI CD-ROMs were about 4x as expensive as IDE CD-ROMs. So what if IDE was slower than SCSI or had higher overhead? The fastest CD-ROM drives still had maximum data rates well under what IDE could easily handle. If all you are going to use the CD-ROM for is installing the OS, and occasionally importing big data sets to disk, then it makes no sense to spring for an expensive SCSI CD-ROM. This is true on both desktops and servers.

The second problem I ran into when I wanted to try BSD is that it did not want to share a hard disk with a previous DOS/Windows installation. It insisted on being given a disk that it could completely repartition. I seem to recall that it would be OK if I left free space on that disk, and then added DOS/Windows after installing BSD.

Linux, on the other hand, was happy to come second after my existing DOS/Windows. It was happy to adjust the existing partition map to turn the unpartitioned space outside my DOS/Windows partition into a couple Linux partitions and install there.

As with the IDE thing, the reasons I got from the BSD people for not supporting installing second were unconvincing. The issue was drive geometry mapping. Once upon a time, when everything used the BIOS to talk to the disk, sectors were specified by giving their actual physical location: what cylinder they were on (C), which head to get the right platter (H), and on the track that C and H specify, which sector it is (S). This was commonly called a CHS address.

There were limits on the max values of C, H, and S, and when disks became available that had sectors whose CHS address would exceed those limits, a hack was employed. The BIOS would lie to the OS about the actual disk geometry. For example, suppose the disk had more heads than would fit in the H field of a BIOS disk request. The BIOS might report to the OS that the disk only has half that number of heads, and balance that out by reporting twice as many cylinders as it really has. It can then translate between this made-up geometry that the OS thinks the disk is using and the actual geometry of the real disk. For disks on interfaces that don't even have the concept of CHS, such as SCSI, which uses a simple block-number addressing scheme, the BIOS would still make up a geometry so that BIOS clients could use CHS addressing.
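
A toy example of that translation (numbers invented for illustration): say the real geometry is 1024 cylinders x 128 heads x 63 sectors/track, and the BIOS reports 2048 x 64 x 63 instead. Both describe the same linear ordering of sectors:

  LBA = (C * heads + H) * sectors_per_track + (S - 1)
  faked (C=10, H=5, S=1): (10 *  64 + 5) * 63 + 0 = 40635
  real  (C= 5, H=5, S=1): ( 5 * 128 + 5) * 63 + 0 = 40635

so the BIOS just maps one coordinate system onto the other for every request.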

If you have multiple operating systems sharing the disk, some using the BIOS for their I/O, and some not, they all really should be aware of that made up geometry, even if they don't use it themselves, to make sure that they all agree on which parts of the disk belong to which operating systems.

Fortunately, it turns out that DOS partitioning had some restrictions on alignment and size, and other OSes tended to follow those same restrictions for compatibility, so you could almost always look at an existing partition scheme and figure out from the sizes and positions of the existing partitions what CHS-to-real-sector mapping the partition maker was using. Details on doing this were included in the SCSI-2 Common Access Method ANSI standard. The people who did Linux's SCSI stuff have a version [1].

I said "almost always" above. In practice, I never ran into a system formatted and partitioned by DOS/Windows for which it gave a virtual geometry that did not work fine for installing other systems for dual boot. But this remote possibility that somehow one might have an existing partitioning scheme that would get trashed due to a geometry mismatch was enough for the BSD people to say no to installing second to DOS/Windows.

In short, with Linux there was a good chance an existing DOS/Windows user could fairly painlessly try Linux without needing new hardware and without touching their DOS/Windows stuff. With BSD, a large fraction would need new hardware and/or be willing to trash their existing DOS/Windows installation.

By the time the BSD people realized they really should be supporting IDE CD-ROM and get along with prior DOS/Windows on the same disk, Linux was way ahead.

[1] https://github.com/torvalds/linux/blob/master/drivers/scsi/s...


That mostly matches my experience. I was following 386BSD's progress at that time and was really eager to try it for myself. However, the machines that it was targeting (SCSI disk of ~200MB, math coprocessor) were out of my reach. It made sense that a workstation-like OS was expecting workstation-class hardware, but it did rule out most 386 PCs that people actually owned.

However, I also agree with @wbl that the lawsuits were ultimately the decisive factor. The hardware requirements situation of BSD was a tractable problem; it just needed a flurry of helping hands to build drivers for the wide cacophony of PC hardware. The lawsuit era stalled the project at just that critical point. By the time that FreeBSD was approaching an acceptable level of hardware support, Linux had already opened up a lead... which it never gave up.


Hm. First, thank you for writing the NCR BIOS. It never let me down while deploying about 200 of them for SMBs. I had Adaptecs at the same time which were annoying to integrate. And there's the thing: from my point of view Adaptec did things differently while setting the pseudo-standard in the PC world. There was this group-think that if SCSI, then Adaptec, which I never understood, because they could be underwhelmingly fiddly to integrate and were expensive.

As to the C/H/S low-level format, NCR could read some Adaptec-formatted drives, while Adaptec couldn't read NCR's. Asshole move. Never mind.

As for the BSDs being behind? Not all the time. I had an Athlon XP 1800+ slightly overclocked by about 100MHz to 2000+ in some cheap board, for which I managed to get 3x 512MB of so-called 'virtual channel memory' because the dealer thought it was cheap memory which ran only with VIA chipsets. Anyway, 1.5GB of RAM about twenty years ago was a LOT! With the Linux of the time I needed to decide how to split it up, or even recompile the kernel to have it use it at all. No real problem because I was used to it, and it wasn't the large mess it is today.

Tried NetBSD, from a two- or three-floppy install set. I don't remember the exact text in the boot console anymore, just that I sat there dumbstruck because it just initialized it all at once without further hassle. These are the moments which make you smile! So I switched my main 'workstation' from Gentoo to NetBSD for a few years, and had everything I needed, fast and rock solid in spite of the overclocking and some cheap board from I can't even remember who anymore. But its BIOS had NCR support for ROM-less add-on controllers built in. Good times :-)

Regarding the CD-ROM situation, even then some old 4x Plextor performed better than 20x Mimikazeshredmybitz if you wanted to have a reliable copy.

As to sharing of disks by different OSes? Always bad practice. I really liked my hot-pluggable 5 1/4" mounting frames which took 3.5" drives, with SCSI ID, termination, and what not. About 30 to 40 USD per piece at the time.


And Microsoft at the time wasn't serious about POSIX.



