What does a PGP signature on a Git commit prove? (kernel.org)
147 points by JNRowe on March 30, 2021 | 73 comments



> The difference between signed tags and signed commits is minimal — in the case of commit signing, the signature goes into the commit object itself. It is generally a good practice to PGP-sign commits

Note: Linus says otherwise, the two are very different: http://git.661346.n2.nabble.com/GPG-signing-for-git-commit-t...

One of the most important reasons it doesn't make sense to sign commits is that keys expire so when you sign something you implicitly say it is valid at most until the key is valid; nothing can be guaranteed after that. It is easy enough to re-sign a tag (even automatically) but you can't re-sign a commit without changing its identity and that defeats the whole purpose of git.

The only reason you'd want to sign commits is to defend against malicious maintainers, i.e. if your patch is changed before being merged, or your authorship is removed, or a patch impersonating you is merged. It's probably better to sign your patches (i.e. the transient artifact you actually exchange) in that case: I see there was some work in that direction (https://lwn.net/Articles/813646/) but apparently it hasn't caught on (https://lore.kernel.org/signatures/ shows the last signatures are from the end of 2020)


The trick is to consider the key invalid for use in signing after it expires, but still valid for verification after that time. Signatures produced by that key can be considered valid well after the key itself has expired. For that to work, you need to have some confidence that the signature was produced at a given time, which is what third-party time stamping authorities (TSAs) do. You send your signature to a TSA, which validates the signature, confirms that the key is valid at the current time, and counter-signs.
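
A minimal sketch of that verification rule, in Python (the field names are mine, and a real verifier would check the TSA's counter-signature rather than take a boolean):

  from datetime import datetime, timezone

  def signature_acceptable(sig_time: datetime, key_expiry: datetime,
                           tsa_attested: bool) -> bool:
      # Accept if a trusted timestamp attests the signature was made
      # while the key was still valid; the key being expired *now*
      # is irrelevant for verification.
      return tsa_attested and sig_time < key_expiry

  sig_time = datetime(2021, 3, 1, tzinfo=timezone.utc)
  key_expiry = datetime(2022, 1, 1, tzinfo=timezone.utc)
  print(signature_acceptable(sig_time, key_expiry, tsa_attested=True))  # True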


For a large project with multiple contributors I wouldn't want to sign the whole tree at a particular point in time (which seems like the natural interpretation of a signed tag) unless I'd actually reviewed it all personally (which, sure, maybe some maintainers do). The commit is the part that I made, so that's the part that I want to sign (and of course it contains a secure reference to a parent commit, but a reader would naturally understand that I'm not vouching for the whole repo history). And I'd far rather publish my contributions as commits on my branches (that then become a PR) than email patches around and worry about how to archive that, personally.


The problem is that your signature is associated with the commit, so it does make you vouch for history, and it lives on after your PR is merged. If you're doing a PR, chances are you're authenticated, so that is what guarantees the changes come from you and are not modified (your PR software should log all changes to the PR). Worst case, your PR itself can be PGP-signed to say "I would like branch XXX, currently at commit YYY, to be merged onto master, currently at ZZZ", but I totally agree it would be a heavy process and signing commits does the job


> The problem is that your signature is associated with the commit, so it does make you vouch for history, and it lives on after your PR is merged.

That's exactly what I want though. I want anyone looking through history to be able to see that I signed that specific diff in that context (something weaker than the whole state of the tree at that point in time, but something stronger than the textual diff with no other context). The fact that rewriting history will break the signature is a feature, not a bug, but I'd like the patch to carry my signature even after it's merged.


I'm casually learning about this stuff, so please forgive my noob questions. My end-goal is to enable laypersons like me to verify the authenticity and provenance of digital communication.

I see that Ryabitsev also announced a git transparency log for the kernel:

https://people.kernel.org/monsieuricon/introducing-the-kerne...

This is very cool.

You wrote:

> ...it is valid at most until the key is valid

I don't follow. Do you mean while the key is valid?

I think the scenario you're describing is:

  a) I'm issued a cert. 
  b) I create (and register) my PGP key. 
  c) I sign a commit with my key. 
  d) My cert is revoked. 
  e) Somehow my key is expired. 
  f) Somehow my commits are now flagged.

Would your concern be addressed by having transparency logs for both the certs and keys?


Sorry I did mean valid while the key is valid, of course.

The scenario I'm describing is simpler:

- You create a PGP key

- You sign a commit with that key

- The key expires (you definitely should have a key rotation process in-place, so keys should expire)

- I want to use your commit. I see it is signed but the key is expired. What is the value of the signature at this moment?

When a key is expired your commits don't automatically become bad, they just become "unverifiable", but you can't re-sign them without changing their hash; that's why signing tags is better: you can re-sign tags quickly (before the key expires).
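
(For illustration, re-signing a tag before key rotation is quick; this sketch assumes a tag named v1.0 and a configured signing key:)

  import subprocess

  # -s signs, -f replaces the existing tag object in place. The tag
  # object changes, but the commit it points to keeps its hash.
  subprocess.run(
      ["git", "tag", "-s", "-f", "v1.0", "v1.0^{commit}",
       "-m", "re-signed with rotated key"],
      check=True,
  )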

> Would your concern be addressed by having transparency logs for both the certs and keys?

Transparency logs only catch problems after they appear and raise the chance that they are detected, but they don't prevent them from happening. If we're talking about making sure no one changes your patches/commits under your feet, a transparency log of commits is "enough", and that's basically what the git repository is all about.


Why is this a problem? The key was valid at the time the commit was made.

> https://lwn.net/Articles/813646/

Huh? Replace one convoluted process with another convoluted and less capable one? Seems like not-invented-here syndrome. Not surprised it's not used. If you're using email for critical, sensitive work, spend the 15 minutes to learn how PGP works and another 15 to set it up. You can even sign your git commits and tags with it!

Also, get some yubikeys, they work well with pgp.


I think you need 15 minutes to understand what the problem is about. An email-based workflow like the one the Linux tree uses is based on patches, not commits. You may want to sign commits but it would be useless because commits aren't exchanged. So the proposed process doesn't "replace" anything: there's no signing in the first place.

You can't sign patches directly because then they wouldn't be usable by git tools like git-am. So you need a meta data structure that contains the patch and all relevant information, kinda like a pseudo-commit: the author, the message, and the patch itself. That meta data structure can then be PGP-signed and sent by email, as described by your link and as you implied was not done
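
(Illustratively, and not the actual format from that proposal: the signed unit bundles the metadata plus the diff so a maintainer can reconstruct the commit. The path and field names here are hypothetical:)

  import subprocess

  # Hypothetical bundle: author, subject, then the raw patch, clearsigned
  # as one unit so provenance survives email transport.
  bundle = (
      "author: Jane Dev <jane@example.com>\n"
      "subject: fix: handle NULL in foo()\n"
      "---\n"
      + open("0001-fix.patch").read()
  )
  subprocess.run(["gpg", "--clearsign"], input=bundle.encode(), check=True)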

> Why is this a problem? The key was valid at the time the commit was made.

You want to check the validity of a commit at the moment you use it, not the moment the commit was made: imagine I put up a fake tree, with a fake commit with you as an author but made with a fake key that already expired. What's the use of that signature? Can a third party cherry-pick that commit, trusting that "the key was valid at the time the commit was made"? You can sign that you did something at some point but you can't sign that you never did something, or that you never used a given key.

> spend the 15 minutes to learn how PGP works and another to set it up

You can't seriously believe that it takes 15 minutes to understand how PGP works and another 15 to set up a functional environment that follows best practices. Even reading the kernel's documentation on how to use PGP (https://www.kernel.org/doc/html/v5.1/process/maintainer-pgp-...) takes more than 15 minutes.

> Also, get some yubikeys, they work well with pgp.

I don't understand what the link is with the rest of the discussion


> You want to check the validity of a commit at the moment you use it, not the moment the commit was made

If you received the repo before the key expired, you can still trust the signature even if the key has since expired, as the key was valid at the time the repo was signed.

> imagine I put up a fake tree, with a fake commit with you as an author but made with a fake key that already expired

Yes, you can't trust a key after its expiration. See the bottom of https://www.kernel.org/signature.html for how the kernel.org folks handled this in 2011. Once they re-signed the release, it was safe to trust what the compromised key had signed within that release.

> I don't understand what the link is with the rest of the discussion

We're discussing the complexity of PGP adoption; security cards simplify the usage of PGP, so I thought it was relevant.


I'm thinking more like the recent PHP repository compromise. I was thinking at some point that I may want to host my own git public repos, but they might not be safe. My solution could be to ask people to check my signatures.


I haven't read a lot about that issue, but if I understand correctly a malicious actor had write access to the repo, and yes, signing tags could have prevented that push from being problematic (I don't know if they're signing content or not). It's always good to check signatures, especially when we have the tools to do that automatically


> keys expire

Only if you want them to expire. They don't have to.


Given enough time, every secret becomes public. You really shouldn't expect your keys to be secure forever; because of that you definitely should have a form of key rotation in place.


A key can be involuntarily "expired" if the private key was leaked.


That's revocation.


Totally thought this was going to be some snarky article claiming they're worthless. Refreshing to see the article just earnestly answers the headline.

> To my knowledge, there are no effective attacks against sha1 as used by git

Perhaps I'm missing something, but wouldn't a chosen-prefix collision be relevant here? I imagine the real reason is that the cost to pull it off for SHA-1 is somewhere in the $10,000-$100,000 range (but getting cheaper every year), which is lots of $$$ to attack something without an obvious attack scenario that can justify it.


So the big problem with Git and SHA1 is that in many cases you are giving full control to an untrusted third party over the sha1 hash of something. For example, if you merge a binary file, it'd be quite easy for them to generate two different versions of the same file, with the same SHA1 digest, and then use the second version to cause problems in the future. You may also be able to modify a text file in the same way without getting noticed in review (I'm not up to speed on how advanced sha1 collision techniques are now).
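
To make that concrete: git names a file by the SHA-1 of a short header plus the raw contents, so a pair of files engineered to collide on that framed input become indistinguishable objects. A sketch of exactly what gets hashed:

  import hashlib

  def git_blob_sha1(data: bytes) -> str:
      # git hashes "blob <size>\0" + contents, not the bare file,
      # so a collision must hold for this framed input.
      framed = b"blob %d\x00" % len(data) + data
      return hashlib.sha1(framed).hexdigest()

  # Matches `git hash-object` on a file containing "hello\n".
  print(git_blob_sha1(b"hello\n"))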

Similarly, the git commits you merge themselves could have that done - the actual git commit serialization gives you a fair bit of ability to append stuff to it that isn't shown in the UI. That wouldn't affect the signed git commits. But it's still dubious to have the ability to change old history in a checkout.

Anyway, Git is apparently moving towards SHA256 support, so hopefully this problem will be fixed soon: https://lwn.net/Articles/823352/


> it'd be quite easy for them to generate two different versions of the same file

Citation needed. When SHA1 was cracked, it cost $110k worth of cloud computing. And there were restrictions on the two files whose checksums matched. IIRC it was like the Birthday Paradox: you don't pick one file and find another sharing the same hash; you generate billions of mutations of similar binaries and statistically two will have the same checksum.

Not exactly easy, fast, or cheap, and it doesn't work with all use cases.



That was a later attack, not the original. Attacks get cheaper with time.


Later versions of the attack were much cheaper, and were also chosen-prefix collisions, which are much less restrictive in their requirements on the file.


> And there were restrictions on the two files whose checksums matched.

I know. That's why I mentioned the difference in difficulty between getting a collision in binary files and git commit objects themselves, versus textual source code.


That nonce value could be ±\0 or 5,621,964,321e100; though for well-designed cryptographic hash functions it's far less likely that - at maximum difficulty - a low nonce value will result in a hash collision.


? How are nonces even involved in this?


Searching for the value to prepend or append that causes a hash collision is exactly the same as finding a nonce value at maximum difficulty (not less than the difficulty value, exactly equal to the target hash).

Mutate and check.


The term nonce has a broader meaning than how it is used in bitcoin. (Edit: I reworded this sentence from what I originally had)

That said, no, finding a collision and finding a preimage are very different things, and while the collision attacks on SHA-1 involve a lot of guessing and checking, they are not generic birthday or brute-force attacks; they rely on weaknesses in SHA-1 to be practical. They also do not make preimage attacks practical.


Brute forcing to find `hash(data_1+nonce) == hash(data_0)` differs very little from `hash(data_1+nonce) < difficulty_level`. Write each and compare the cost/fitness/survival functions.

If the hash function is reversible - as may be discovered through e.g. mutation and selection - that would help find hashes that are equal and maybe also less than.

Practically, there are "rainbow tables" for very many combinations of primes and stacked transforms: it's not necessary to search the whole space for simple collisions and may not be necessary for preimages; we don't know and it's just a matter of time. "Collision attack" https://en.wikipedia.org/wiki/Collision_attack

Crytographic nonce > hashing: https://en.wikipedia.org/wiki/Cryptographic_nonce#Hashing


> Brute forcing

The attack being discussed is not a brute force attack (or not purely). If the best attack on SHA-1 were brute force then we would still be using it.

> to find `hash(data_1+nonce) == hash(data_0)` differs very little from `hash(data_1+nonce) < difficulty_level`.

Neither of those is a collision attack (assuming you don't control the data variable). The first is a second preimage and the second (with equality) would be a normal preimage.

The attack on SHA-1 under discussion (chosen-prefix collision) is finding hash(a+b) == hash(c+d), where you control b and d (but not necessarily a and c)

> Practically, there are "rainbow tables" for very many combinations of primes and stacked transforms:

What do primes or rainbow tables have to do with any of this? Primes especially. Rainbow tables are at least related to reversing hashes, if totally irrelevant to the subject at hand, but how did you get to primes?


Practically, if browsers still relied upon SHA-1 to fingerprint and pin and verify certificates instead of the actual chain, and there were no file size limits on X.509 certificates, some fields in a cert (e.g. CommonName and SAN) would be chosen and other fields could then serve as a nonce.

In the context of finding a valid cert with a known-good hash fingerprint, how many prime keypairs could there be to precompute and cache/memoize when brute forcing?

"SHA-1 > Cryptanalysis and validation " does list chosen prefix collision as one of many weaknesses now identified in SHA-1: https://en.wikipedia.org/wiki/SHA-1#Cryptanalysis_and_valida...

This from 2008 re: the 200 PS3s it took to generate a rogue CA cert with a considered-valid MD5 hash: https://hackaday.com/2008/12/30/25c3-hackers-completely-brea...

... Was just discussing e.g. frankencerts the other day: https://news.ycombinator.com/item?id=26605647


'Quite easy' is relative, and just because you can find some file that matches the hash, there is also a very high likelihood that the file will be useless. The chances of generating a USEFUL file (where useful means it does something, like compile, nefarious or not) that causes a SHA1 hash collision are very low... currently.

So to get a nefarious file that puts in an exploit or something useful for an attacker AND causes a SHA1 collision is a very very high bar to meet, currently.

Hopefully they replace SHA1 with SHA256/etc before the capability becomes feasible.


Sort of. This is really tricky to achieve with text files (where the padding necessary to make the hashes line up looks completely out of place), but many binary formats allow arbitrary padding; e.g. zip files have an index at the end. Conversely, png files have a header and ignore meaningless data at the end. Both can be padded with arbitrary data (and, indeed, you can cat a png and a zip together as a janky steganography system)
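
The png+zip trick from that parenthetical is a two-liner, assuming you have an a.png and a b.zip on hand:

  # PNG readers parse from the signature at the start and ignore trailing
  # bytes after IEND; ZIP readers locate the archive from the end of the
  # file, so one file satisfies both parsers.
  with open("a.png", "rb") as p, open("b.zip", "rb") as z:
      with open("combined.png", "wb") as out:
          out.write(p.read() + z.read())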


For text files, tools for brute forcing a git commit hash [1] work by messing with the author and committer timestamps.

Of course, that only gives you about 40 bits to play with (assuming the further you move the timestamp, the easier it is to detect) so it wouldn't be completely sufficient on its own.

[1] https://github.com/DesWurstes/VanityCommit#alternatives
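
Roughly, those tools recompute the commit object's SHA1 while sliding the timestamps, something like this sketch (tree/parent ids and the epoch are placeholders):

  import hashlib

  def commit_sha1(tree: str, parent: str, ts: int) -> str:
      body = (f"tree {tree}\n"
              f"parent {parent}\n"
              f"author A U Thor <a@example.com> {ts} +0000\n"
              f"committer A U Thor <a@example.com> {ts} +0000\n"
              f"\nmsg\n").encode()
      # git hashes "commit <size>\0" + body, like it does for blobs.
      return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

  ts = 1617100000  # placeholder epoch seconds
  while not commit_sha1("0" * 40, "1" * 40, ts).startswith("cafe"):
      ts += 1  # mutate the timestamp until the hash has the wanted prefix
  print(ts)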


I agree that in binary formats it's easier to accomplish, but it is by no means easy.

Thankfully most source code is plain text. There are mostly trivial ways to encode information in whitespace/etc, but again, the ability to carry some useful information AND cause a SHA1 collision is currently non-trivial. Plus Git has specifically put in some hardening for the existing SHA1 hashes to make this even less feasible in the particular application of a git repo.

A pipe wrench to the back of the head (or social engineering) of some random committer is probably a LOT easier to accomplish.

That said, if your primary nemesis is one of the giant nation states (US, China, Russia, etc.) AND you are a very high-value target for them, then all bets are off, as they can pretty much accomplish whatever they want against any individual/small group on the planet. I would imagine most of us are not even remotely in this situation.


I have thought about this in regards to Nix but as far as I could conceive, it seemed that:

a) you can definitely generate content that will hash to the same value - this is a given, since anything that compresses a source into a much shorter representation allows that by definition (let's ignore feasibility/time/cost)

b) Making a) in a way that the contents are usable for a given purpose, doesn't introduce severe noticeable changes, and plays ball with the remaining stuff (OS, build chains, the context it's included in, etc.) seems to be the actual difficult part

So my conclusion at the time was that in terms of feasibility it's probably easier to hack the chain itself, replacing the hash. But again, if there's a master "hash" for whatever artefact is produced that relies on all those hashes, it would also need to be replaced?


I don't think this is true.

For a chosen-prefix collision you need to be able to hide a few bytes of binary data in the file. Sometimes that is prohibitive, but in most file formats it is doable.


OK, let's assume what you say is true. You spent the $100k or whatever to compute a SHA1 hash collision and you can replace the bits 11011 with the bits 1010. How is this useful? It *MIGHT* be useful, but chances are it's not very useful, as lots of data formats are pretty lossy and a random bitflip here or there doesn't matter that much.

Just because you have a SHA1 collision doesn't mean you can put any bits wherever you want and still have it be a SHA1 collision.

The really hard part is being able to flip whatever bits you want(to achieve some goal, like a RCE or whatever) AND make it a SHA1 collision. This is currently very much an open problem, nobody has been able to demonstrate(that we are aware of).


> For example, if you merge a binary file, it'd be quite easy for them to generate two different versions of the same file, with the same SHA1 digest, and then use the second version to cause problems in the future.

I think the last time I saw this come up the response was that git will just ignore your attempt to merge a new file if the checksum is the same. So you would have to compromise the repo server directly to replace it.


Git migrated to a hardened variant of SHA-1 back in v2.13. https://github.blog/2017-05-10-git-2-13-has-been-released/#s...


> If we cared to, we could walk each commit all the way back to the beginning of Linux git history, but we don't need to do that — verifying the checksum of the latest commit is sufficient to provide us all the necessary assurances about the entire history of that tree.

Doesn't this imply a tremendous amount of trust in the signer? It sounds like it's only a guarantee about history if every commit was signed.


It's not a matter of trusting the signer. Every commit includes the hash of its parent, and thus the combined hashes of all its ancestors. Signing just the last commits signs the entire chain.
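
In miniature, the chaining works like this (sha256 purely for illustration):

  import hashlib

  def commit_id(parent_id: str, content: str) -> str:
      # Each id covers the parent's id, so the newest commit's id
      # transitively commits to the entire history.
      return hashlib.sha256(f"{parent_id}\n{content}".encode()).hexdigest()

  c1 = commit_id("", "initial commit")
  c2 = commit_id(c1, "add feature")
  c3 = commit_id(c2, "fix bug")
  # Signing c3 pins c1 and c2 too: altering "initial commit" changes c1,
  # which changes c2, which changes c3 and breaks the signature.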


Right... but where did the signer get the previous commits?


Oh, I see what you mean. But a signed commit doesn't imply that the signer vouches for the code or its history (unless they say they do, but then you have to believe them that they reviewed the entire history). What it does do is guarantee that after a commit is signed, nobody can come in and retroactively alter the history of the repo.


That's true. I guess we were approaching worries from opposite ends.

* Did it make it to the signer intact?

* Did it make it from the signer to me intact?

And a single signed commit is good for the second question only (which is still a big step up from no signatures!)


What if there are 5 unsigned consecutive commits somewhere in the middle of the chain? Or is the idea that you still can't alter anything in that unsigned section without altering the signed commits at the ends?


Yes, that’s exactly the idea


SHA1 wasn’t meant to be a security measure but as far as I know no one has been able to generate a collision that could fool a code review.

> Just replacing an object in a repository is not enough; the attacker would have to find a way to distribute that object to other repositories around the world, which is not an easy task. The colliding object would have to function as C source (if the kernel were the target of attack here), and would have to stand up to a casual inspection — it would have to look like proper kernel source. https://lwn.net/Articles/715716/


Regardless of whether or not SHA1 was meant to be a security measure, it should have been. Linus's original thinking re: git security is much less useful than just having a secure hash function that you can rely on.


> SHA1 wasn’t meant to be a security measure

It literally stands for Secure Hash Algorithm.


Presumably they mean that the use of SHA1 in git wasn't meant to be a security measure.


There's a cautionary note at the end of the article reminding the reader that git still relies on SHA-1 (with mitigations). Recall that $10k worth of GTX 1060s produced a SHA-1 collision in late 2019.

https://www.usenix.org/conference/usenixsecurity20/presentat...


Shameless plug: you can mostly(1) solve the problem of Git PGP signatures becoming unverifiable due to key compromise with my OpenTimestamps software: https://petertodd.org/2016/opentimestamps-git-integration

tl;dr: it proves data existed in the past. In the case of a PGP signature on a Git commit, that can prove the signature (and the repo contents) were created prior to the key being compromised.

1) Mostly, because sometimes you don't know when the key was compromised.


We have something similar planned in sigstore, but with a slightly different approach to the timestamps, using Transparency Logs. We have some demos here on how to sign git commits: https://github.com/sigstore/cosign/blob/main/FUN.md

The idea is that you can use short-lived keys, bound to certificates via an ACME-style challenge, but based on an email address. The signature goes into a Transparency log to prove it happened while the cert was valid. Then revocation is no longer an issue.


So a big problem with that is that the long-term durability of transparency logs is doubtful. Being a record of every SSL cert ever issued, they're an enormous amount of data, and I won't be surprised if actually getting that data becomes hard in the future.

OTS just depends on the Bitcoin block headers (megabytes/year) and the databases of timestamps maintained by the public calendars (a few GB/year; it's one new entry per calendar per second). It's much easier to archive a few GB of data than the tens of terabytes in the public CT logs.


Peter, OpenTimestamps is great. Is there a plan to get the raw ots file embedded in OpenPGP signatures (as notation data) so that I could be sure that the signature was made at a given time? (Just like RFC 3161-style timestamping data.)


Nice to see a non-bullshit application for blockchains :)


Note that OpenTimestamps itself doesn't actually use a blockchain. But while the protocol supports multiple time attestation mechanisms, almost everyone using OTS uses it in conjunction with Bitcoin (the public calendars all timestamp via Bitcoin). Thus almost all OTS timestamps depend on the Bitcoin blockchain for their timestamp security.


Is there a high-to-low level design document to read on how to verify the timestamp? A simple OP_RETURN + hash output in a transaction is dead simple to verify (just get the transaction from any place, see the output, check the timestamp). I'd like to compare OTS with it.


Not really. But if you understand the basics of how OTS works, it's fairly easy to figure out how to do that with the `ots info` command. Basically, it just prints out the raw commitment operations the timestamp proof performs, letting you follow along.

Note that OTS proofs do not have any notion of a transaction in them. Rather, they perform commitment operations that end up at the merkle root of a Bitcoin block (at least in the 99.9999% of proofs that use Bitcoin). With the `--no-bitcoin` option, the OTS client will tell you what block # to look for, and what the merkle root should be for the proof to be valid.
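
For example (the .ots path here is hypothetical):

  import subprocess

  # Print the raw commitment operations a proof performs; the output
  # walks the hash ops down to a Bitcoin block's merkle root.
  subprocess.run(["ots", "info", "signature.asc.ots"], check=True)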


Something that is not discussed: to verify the signature you need a copy of Linus's public key, but how do you know you have his actual public key? If you downloaded the key from some server somewhere, and that server was compromised, it could fool you into thinking the signature is valid, which undermines the whole thing.

One way to know the key is valid is if you meet Linus himself and verify it first hand (like when you verify Whatsapp keys, which you do, right?). But not everyone can do that. PGP implements a "web of trust" security model. That means if someone you trust has signed Linus's key, you can verify his key via their signature. This extends beyond one hop; it's up to you how much you trust it based on its signatures.

This is in contrast to centralised systems like SSL/TLS where you have no choice but to trust entities like Microsoft, Google, Verisign etc.
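
(If you want to poke at the web of trust yourself, gpg can list the certifications on a key; the address below is just a placeholder:)

  import subprocess

  # Each "sig!" line in the output is another keyholder vouching for
  # this identity - the edges of the web of trust.
  subprocess.run(["gpg", "--check-sigs", "someone@example.com"], check=True)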


> Kernel.org web of trust: PGP keys used by members of kernel.org are cross-signed by other members of the Linux kernel development community (and, frequently, by many other people). If you wanted to verify the validity of any key belonging to a member of kernel.org, you could review the list of signatures on their public key and then make a decision whether you trust that key or not. See the Wikipedia article on the subject of the Web of Trust.

https://www.kernel.org/signature.html

Looks like the kernel.org folks actually use the web of trust model.


Yeah, they do. There's a video of Linus talking from many years ago about git where he says web of trust is the only way to do security.


In practice, most people don't care if they have a copy of Linus's public key. They only care if they have a copy that is sufficiently likely to be Linus's public key.

This is the reason why the web of trust has, by and large, failed to reach any noteworthy amount of adoption. The web/operating system PKI is good enough for most purposes. Unless your usage scenario involves a massively critical government agency with good reason to be actually paranoid, "good enough" for the web means "good enough" to deliver a public key for the Linux kernel. Or more likely, "good enough" to deliver an installation image for the distribution of your choice that is theoretically signed with PGP/something actually reasonable, but whose signature you won't check anyway because the web PKI is, in fact, good enough.

The distribution of your choice may then possibly have verified the source of the public key, but that's so far upstream of you that, quite honestly, you have no way of checking anyway. Even if you bothered to verify that this was checked upstream, there are tons of other critical system components for which you would have to repeat the exercise.


> With git, a cryptographic signature on a commit provides strong integrity guarantees of the entire history of that branch going backwards, including all metadata and all contents of the repository, all the way back to the initial commit.

The purpose of a signature is to add authenticity assurances on top of integrity guarantees (which are necessary to authenticity, but independent). A document is signed by producing a digest of it, and that digest provides an integrity guarantee. The signature of the digest is then an additional authenticity guarantee.
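
As a sketch of that layering (Ed25519 via the cryptography package, purely for illustration; PGP uses different primitives):

  import hashlib
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  document = b"the repository contents"
  digest = hashlib.sha256(document).digest()  # integrity: binds the content

  key = Ed25519PrivateKey.generate()
  signature = key.sign(digest)                # authenticity: binds a signer

  # Verification re-derives the digest and checks the signature over it;
  # raises InvalidSignature if either the document or signature changed.
  key.public_key().verify(signature, digest)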

"Integrity guarantees" means that it is vanishingly unlikely that someone can make a maliciously altered clone of that repository, including its history, all the way to the initial commit, including that signature.

The git hashes already provide pretty strong integrity guarantees of this sort. We can be confident that someone re-creating a fake history cannot end up at the same baseline at the HEAD, with the same commit hash. The strength of that confidence could be increased simply by adding additional, stronger hashes as part of a commit's content.

In terms of authenticity, the signature provides confidence (not a "guarantee") that whoever signed that commit held some beliefs about the repository, sufficient to want to put their signature to it.

Someone could be duped into signing a corrupted repository, believing it to be genuine, in which case their beliefs were wrong. (And do not necessarily match their present beliefs!)

Someone could be coerced into signing a corrupted repository, in which case their beliefs at the time of signing are that the repository is malicious and that they will be harmed if they don't comply.

Someone's private key could be compromised, so that an unauthorized agent perpetrates their signature without their knowledge or consent.

An authenticity assurance is not as easily quantifiable as an integrity guarantee. The integrity guarantee rests on the digest algorithm (how difficult it is to produce a document matching a given digest), but authenticity assurances involve people problems.


I think the most valuable aspect of signed commits is also signing the contributor license agreement with the same key. This is the Apache Software Foundation's policy.


Are there alternatives to PGP for commit/tag signing?


Not really. There are utilities that sign things (e.g. Signify) but they don't attempt to solve the difficult problem of identity management.

Also, OpenPGP is an open published standard with an extensive infrastructure of implementations. It's hard to overcome that with a new proposal.


That the committer hopes you believe they manage their PGP private key better than they manage their Git account security?


You know anyone can push a commit "authored" by any GitHub user? And the GitHub UI will happily show it as that user

GPG signatures are the only way of having any verification at all of who wrote a commit


Git doesn't have accounts. It's a program you run on your computer.


Which “git account”? Signing means you don’t need to depend on the git mirror’s security to ensure you know who made a commit.


To the above commenters: Git is also used in enterprise scenarios with private or semi-private repos that enforce corporate user auth.

My initial comment was dumb and deserved downvotes, but not all git usage is the git usage you know of.


> Obligatory note: sha1 is not considered sufficiently strong for hashing purposes these days, and this is widely acknowledged by the git development community. Significant efforts are under way to migrate git to stronger cryptographic hashes

Oops, forgot to add a tag to your hash algorithm?



