Yes. The real libtiff people are trying to resolve that with the domain owner; he seems to be at least a little bit more cooperative than the libtiff.org one.
Seems like we've had organizations that house ideas for the public good for centuries now.
Why not involve libraries in the effort? They could keep a "master" copy and github.com, gitlab.org, SF.com, or whoever else comes next can host development versions.
They only do archiving. Hosting projects that are under development is not in their mission. https://www.linuxfoundation.org/ looks like a better match.
There is a need for an organization that is willing to host established but small and unfunded projects like libtiff, libpng, zlib, etc., and give them some minimal organizational backing in the interest of continuity (so no github). Somewhat similar to, but more general than, what the Network Time Foundation is doing for ntp and related projects.
I'm not saying you should do it or that it's the best option, but putting your code under the umbrella of something like the FSF or the Apache Foundation comes with these sorts of benefits.
I love the Internet Archive, so it would be great if some kind of archiving function could be worked out. But if it's on GitHub now, it should be exported to Zenodo today, so there's at least one reference version around for the Internet Archive to back up.
To my knowledge this was the original iPhone jailbreak (JailbreakMe). I was on IRC whilst the author was seeking donations before releasing it ;) Ran it on my trusty iPod Touch 1g only minutes after it was released.
This was for iOS 1.1 though, not iOS 1.0, which was not encrypted and was never really locked down. It was Apple's choice to release iOS 1.0 without that security enabled that made further jailbreaking efforts much more straightforward, because there was already extensive knowledge of the file system.
This was all well before there was an App Store, and when Apple's public position was that they'd never allow third-party software on their platform ;)
A bit tangential, but I just noticed that in the Arch Linux repos, `libtiff` is version 4.0.6, but there's another package called `libtiff4`, which is version 3.9.7. I don't suppose anybody here might know the rationale behind this bizarre version/naming paradigm?
It doesn't seem to have been actively used in the past few years, though; isn't there a process one can use to claim unused GitHub user / org names? Might be worth talking to GitHub staff about...
Nah, sunsite is a noob, along with Slackware. When you wanted to download boot/root, Tamu, or SLS you went to tsx-11.mit.edu or ftp.funet.fi to get it.
If you have a maintainer who signs their commits, you trust their signatures and call it a day. Go with the latest commit and that tells you the history that they have cryptographically claimed is "correct".
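For what it's worth, here's a minimal sketch of what that check could look like, using Python to shell out to git (it assumes GPG-signed commits and that the maintainer's public key is already in your keyring; the repo path is just a placeholder):

    import subprocess

    def head_signature_ok(repo_path="."):
        """Ask git whether the newest commit carries a good GPG signature.
        The %G? placeholder prints 'G' only for a good signature."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "-1", "--pretty=%G?"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out == "G"

    print("HEAD signed and verified:", head_signature_ok())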
One problem I see there is that, with git, you still have to take care that you keep around all the commits that at any given point in time were declared "correct".
For example, if a commit is amended and force pushed out to remove a backdoor from a popular package, a site interested in the history of the library will want to ensure that the faulty commit stays around.
I'm not even sure that's 100% possible, but at the very least it will take some careful git configuration.
You never rewrite published history with git. It's fine to force push signed git commits to a random branch that are going to be rewritten -- that's just how development works. But when you merge something, it stays that way. Touching history in release branches is something that will always end in tears (all of the tagging breaks, everyone's clone will complain when they update it, etc). Just bad news all around.
As for a site which cares about archiving, they can almost certainly add a tag for every commit they slurp up (making sure it doesn't get hit by the git gc). Though, I'm fairly sure that you can just configure your git instance to never garbage collect.
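Roughly what I have in mind for the archiving side, as a sketch (the clone path and tag naming are made up, and the config keys simply tell git never to prune or expire anything):

    import subprocess
    import time

    REPO = "libtiff-archive"  # hypothetical local clone kept purely as an archive

    def git(*args):
        return subprocess.run(["git", "-C", REPO, *args],
                              capture_output=True, text=True,
                              check=True).stdout.strip()

    # Never auto-gc, never prune, never expire reflogs: commits stay around
    # even if upstream later rewrites them away.
    for key, value in [("gc.auto", "0"),
                       ("gc.pruneExpire", "never"),
                       ("gc.reflogExpire", "never")]:
        git("config", key, value)

    # Fetch everything, then pin every remote branch head under a dated tag
    # so it remains reachable after any future force push.
    git("fetch", "--all")
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for line in git("for-each-ref",
                    "--format=%(objectname) %(refname:short)",
                    "refs/remotes").splitlines():
        sha, ref = line.split(" ", 1)
        git("tag", f"archive/{stamp}/{ref.replace('/', '-')}", sha)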
Such problems are always theoretical, until they suddenly become real, and then you're happy that someone years ago thought of that possibility and left behind a way to solve them...
Yeah. It's still on CVS. May as well throw it up on GitHub like every other open source project. That doesn't have to be its only home on the web, but at least it would always be available there. The main open source project I work on (Pywikibot) isn't primarily hosted on GitHub, but it is mirrored there.
Any distro that complies with the GPL will generalise its requirements to everything else, and thus host original sources for every single package. :-)
TIFF? That's a format I haven't heard of in a long time.
edit: to everyone replying. TIFF is awful. We have lossless compression now (read: PNG) which is at least a billion times better. We shouldn't use it for anything in this day and age. Hell, just DEFLATE your TIFF and call it a new format. It will be better than TIFF.
TIFF is one of the most open and flexible file formats out there.
PNG offers only a tiny subset of what's possible with TIFF. Note that TIFF supports multiple compression formats, lossless and lossy, multiple sample formats and sample counts, very flexible organisation of data layout etc. PNG offers a small number of sample formats, fixed layout and compression, and none of the higher-level TIFF features.
PNG is simple and easy to use. But "better" is subjective and by most objective measures it's inferior to TIFF.
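To make that flexibility concrete, here's a toy sketch with Pillow (the file names are made up, and the compression strings are Pillow's names for the TIFF codecs, so double-check them against the Pillow docs):

    from PIL import Image

    im = Image.open("photo.png")  # hypothetical source image

    # Same pixels, three different TIFF encodings: two lossless, one lossy.
    im.save("photo_lzw.tif", compression="tiff_lzw")               # lossless LZW
    im.save("photo_deflate.tif", compression="tiff_adobe_deflate") # lossless deflate
    im.convert("RGB").save("photo_jpeg.tif", compression="jpeg")   # lossy JPEG-in-TIFF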
TIFF is typically used as an intermediate uncompressed memory format for almost everything in images. Your PNG library is probably decompressing PNG images into TIFF images internally. Your printer probably translates whatever is given to it into a TIFF image to print, your scanners probably use TIFF as a raw format to translate into something else. Even camera RAW images are probably using TIFF in some way or another.
Tiff is very useful because it lets you do almost anything. A tiff image is just several arrays of numbers, you decide how many bits, signed, unsigned, float, int, etc.
This is very helpful for processing which treats each pixel as a sample of the scene, such as computational microscopy or remote sensing. I've always wondered if that had something to do with why it was hosted at remotesensing.org.
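As a rough illustration of the "arrays of numbers" point (using the third-party tifffile package and made-up data; the API details are from memory, so treat this as a sketch):

    import numpy as np
    import tifffile

    # Hypothetical sensor data: each pixel is a measurement, not a display value.
    radiance = np.random.rand(2048, 2048).astype(np.float32)            # 32-bit floats
    counts = np.random.randint(0, 4096, (2048, 2048), dtype=np.uint16)  # 12-bit counts in uint16

    tifffile.imwrite("radiance.tif", radiance)  # float samples round-trip exactly
    tifffile.imwrite("counts.tif", counts)

    assert np.array_equal(tifffile.imread("radiance.tif"), radiance)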
See my edit. Also, when you are doing image processing, operating on scanlines is usually the least performant way of doing things, hence why we have texture compression formats (even lossless ones) which do block encodings. TIFF has no reason for existing in my opinion.
TIFF supports many useful features for processing very large images. Tiled compression, storing channels in contiguous hyperplanes or on a single image plane, multiple levels of detail, custom compression codecs (lossless or lossy), sparse images, arbitrary bit widths, and permits storing arbitrary metadata with the data.
Users of libtiff often treat it like PNG/JPEG, but good TIFF viewers can leverage this functionality. Typically few batteries are included.
TIFF is kind of a crossbreed between structured portable formats (HDF5/NetCDF) and imagery.
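For the tiling point specifically, here's a sketch of writing a large, tiled, deflate-compressed image with separate channel planes using tifffile (the sizes and parameter names are assumptions worth checking against the tifffile docs):

    import numpy as np
    import tifffile

    # Hypothetical large channel-first RGB image.
    image = np.zeros((3, 8192, 8192), dtype=np.uint8)

    # 512x512 tiles, deflate-compressed, channels stored as separate planes.
    tifffile.imwrite(
        "big_tiled.tif",
        image,
        tile=(512, 512),
        compression="zlib",
        planarconfig="separate",
    )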
I don't doubt that you can have a library that provides all those features. All those features can exist for any lossless image format. My point is that as far as comparing image formats goes, the capabilities of the frontend library aren't a useful metric for evaluating the format itself, which, in my opinion, can optimize for the following traits:
1. On-disk size
2. Compression/decompression speed
3. Access speed (for use in image analysis algorithms, editing, compositing, etc)
4. GPU friendliness (GPUs operate in warps on small blocks of contiguous pixels in image space)
TIFF doesn't really optimize for any of these (not even #2, since in-memory decompression can be faster than paging uncompressed data from disk).
The big problem with highly tuned formats is that they make a lot of choices for the user. This works well when the format is used exactly as designed, but that is often not the case. With a tif I can pick exactly the compression, bit depth, color mode, tiling etc. that I need for the specific data.
TIFF can operate with scanlines, strips or tiles. Big images, such as 200k x 200k digital pathology slide scans, are stored in TIFF as e.g. 512x512 tiles, individually compressed and transparently accessible to the viewer. It is perfectly capable of dealing with images of vast sizes.
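That transparent tile access is what libraries like OpenSlide build on; here's a minimal sketch with the openslide-python bindings (the file name and coordinates are hypothetical):

    import openslide

    slide = openslide.OpenSlide("scan.tif")  # hypothetical pyramidal slide scan

    print(slide.dimensions)   # full-resolution size, e.g. (200000, 200000)
    print(slide.level_count)  # number of pre-built levels of detail

    # Read one 512x512 region at full resolution; only the tiles covering
    # that region get decompressed, not the whole image.
    patch = slide.read_region((100_000, 100_000), 0, (512, 512))
    patch.convert("RGB").save("patch.png")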
I don't know of any other application-independent image format with good support for layers. Seriously. (Even Tiff doesn't really have good support for layers.)
.PSD is tightly tied to Photoshop's internals, and .xcf is tightly tied to Gimp's internals. (The Gimp core developers explicitly recommend against using .xcf as an interchange format, even though the format has a fairly complete publicly available spec.) I've seen .gif images with multiple frames used as images with layers, but that forces every layer to have the same resolution, and of course limits you to an 8-bit color depth.
Supposedly the Gimp and Krita devs are collaborating on a new interchange format that will support things like layers, but I haven't heard any news about that in years.
It's doing fine. It's the native format for MyPaint, Krita supports it, Scribus supports it. Gimp's support is outdated, but that's because Gimp gets releases so rarely.
It was kind of a rhetorical question. I looked it up after I sent the comment to confirm what you just said. I was answering to the last paragraph of the parent comment.
TIFF is still used in the Apple ecosystems. It got a mini-rebirth recently when Apple started doing retina displays. TIFF has the ability to store multiple images, which Apple has latched onto to allow different images to be used for different resolution screens.
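A quick sketch of the multi-image part with Pillow (the icon file names are hypothetical):

    from PIL import Image

    icon_1x = Image.open("icon_32.png")  # hypothetical 1x artwork
    icon_2x = Image.open("icon_64.png")  # hypothetical 2x (retina) artwork

    # One TIFF, two images: page 0 is the 1x version, page 1 the 2x version.
    icon_1x.save("icon.tiff", save_all=True, append_images=[icon_2x])

    # Reading it back: each resolution is a separate frame.
    multi = Image.open("icon.tiff")
    print(multi.n_frames)  # 2
    multi.seek(1)          # jump to the 2x image
    print(multi.size)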
Application support is pretty huge, though: that's what effectively turned JPEG 2000 into a niche format. The standard was patent- and license-encumbered for its first decade and change, and the vendors didn't make interoperability a priority because they assumed the technical merits would force everyone to adopt it in the end. When you're talking about lossless, however, it's often in the context of archival storage, and people get spooked after they encounter files which they have trouble opening with perfect fidelity.
It's a shame since the compression technology was impressive and j2k would also have made a great progressive image format had browsers supported it.
TIFF has selectable (and pluggable) compression. While it's standard to use deflate compression, you can also use jpeg2000 or any other algorithm of your choice if you want lossy compression.
No matter how old or outdated a technology is, there will always be someone still using it somewhere.
I work for a company that makes workers' compensation software. I don't know of anything in our product that actually generates new TIFFs, but we've got plenty of old ones bumping around that were either uploaded by users or imported from other systems. I end up interacting with TIFFs one way or another every other month or so.
TIFF was used (probably still is) to store data from gas detectors. It wasn't called TIFF, but it was based on the format plus custom tags to store extra data and image layers. As a format it is super-flexible.
For important projects like this (libjpeg, libpng and others) it would make sense if there was some sort of place to get all of them apart from mirrors.
What am I saying, that place[0] exists. Developers shouldn't have to shoulder the burden of hosting their code if they don't want to or can't weather the expense. It certainly seems in this case they couldn't pay for self-hosting (or friend-hosting, or whatever this is). I do suppose it is difficult to move development to a new system though (CVS to git).
I'm no Stallman, but I'm deeply uncomfortable with how readily everyone is accepting (and encouraging) GitHub as the complete overlord of open-source software. As already pointed out, Sourceforge is a cautionary tale of how these services can go very wrong due to business issues. There's also a much larger philosophical issue with basing the open-source economy on a proprietary platform with no particular intent of open-sourcing its core software.
Yes, GH does a lot of things right, including generally making it easy to export data from their apps in a reasonably vendor-neutral form. But what happens when GH runs into financial trouble like SF did? Do we want just about every project out there to have to struggle to do something with their GitHub issues?
There are issues with having open-source projects maintain their own infrastructure, but I think it's the right thing to do wherever possible. It makes them truly independent in a way that a GitHub repo can never be.
You might lose a lot of metadata around it. The comment to which you're responding mentions issues, which along with pull requests would be a particular area of concern.
It ameliorates the problem by making sure you have your repo and history, yes; but you do lose the means of distribution (the host), which is a loss, and I think that's really what happened here with libtiff.
The question is, can we trust an entity like that to remain trustworthy through decades? Turns out SF.net wasn't trustworthy through all of that time, even if they were in the beginning, and seem to be sincerely trying to be trustworthy again. So, if we put all of our eggs in one basket, we better be really confident of that basket. We could have lost every OSS project website, rather than just one, if an entity that we trust today becomes untrustworthy tomorrow.
GitHub will rot some day as well. (I think we're already seeing it start to decline.)
This is simply a permanent problem. Archive.org and to a lesser extent IPFS are viable solutions for archiving, but _contingency_ plans are I think the missing component here. Pray for the best but prepare for the worst.
You know, come to think of it, Archive.org is sorta kinda the right place to store these things. If they offered source code control, they would get both the source and the path to the latest version all at once, which could be an invaluable historic record.
A git HTTP archive can be crawled and archived in principle. They might well be doing this already with github and elsewhere just through the normal course of their operation.
I don't think we're seeing a GitHub decline. Yes, they've had some internal strife not all that connected to business, but Google just moved a lot of its OSS there, and Microsoft just moved a bunch of stuff there. Git repos take a single command-line statement to move, but if GitHub is slowing at all, it's only because there isn't much left to hoover up.
Gitlab et al are growing, and now AWS has its own integrated solution, but I don't see Github going anywhere for a very, very long time. (Barring catastrophic happenings, of course.)
Presumably the fact that they take down repositories that disagree with their politics (or the politics of various governments). They also don't appear to put up any fight against DMCA takedowns, and they mandate that all users run proprietary JavaScript. LibreJS makes the site mostly work though, so it's not that bad. The biggest problem is their policies, which are not pro-free-software (no matter what they might say).
That place should not be a company which is known for not fighting DMCA requests and which requires users to run proprietary JavaScript. The best place would be the Internet Archive, hopefully running an FSF-approved Git front-end (GitLab or GNU Savannah -- Gogs is probably also fine, but they haven't reviewed it). Nothing else is acceptable if you actually want to make sure that code will truly outlive us.