It's most of the large articles; the entirety (stripped of citations and image metadata) compressed is more like 4GB. For comparison, everything (including citations) as a Wikimedia XML dump is about 7GB bzipped and thirty-something GB uncompressed.
from the blog:
>>> First of all, it compresses not the entirety, but rather the most popular subset of the English Wikipedia. Two dumps are distributed at the time of writing, the top 1000 articles and the top 300,000, requiring approximately 10MB and 1GB, respectively.
Seriously mis-titled, since it's nowhere even close to the "Entire Wikipedia" – it's a tiny subset of the English-language Wikipedia from what I can tell.
Nice job, this looks really useful - would certainly help for the times when I'm stuck with no internet access and need to look something up.
One minor niggle: when I changed the file I wanted to use in settings, there was no confirmation or notification to let me know it was downloading the new file. I ended up stopping the download, erasing the data and starting again, just to be sure. It might be worth adding a confirmation to let users know the change took effect and the new file is being downloaded.
Selenium's WebDriver is great for this. It has different implementations, so you can use it like wget (but more sophisticated), where it doesn't run an actual browser, or you can use an implementation that drives a real browser like Chrome or Firefox (good for debugging).
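For a concrete picture, here's a minimal sketch assuming the Node bindings (selenium-webdriver on npm) and a local Firefox install; the same pattern exists in the Java/Python bindings. This is the "drives a real browser" mode; a browserless implementation slots into the same code if you just want wget-style fetching.

    // Assumes: npm install selenium-webdriver, plus Firefox on the PATH.
    var webdriver = require('selenium-webdriver');

    // Builds a driver for a real, visible Firefox window (good for debugging).
    var driver = new webdriver.Builder().forBrowser('firefox').build();

    driver.get('https://en.wikipedia.org/wiki/IndexedDB')
      .then(function () { return driver.getPageSource(); })
      .then(function (html) {
        console.log(html.length + ' characters of rendered HTML');
        return driver.quit();
      });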
Wikipedia deliberately avoids branching, to force people to work together rather than go off on various rewrites. Each article is version-controlled separately. The system doesn't track articles across merges and splits, but many VCSes have the same problem.
This is cool, but the one thing I miss from all wikipedia dumps so far is images. It's essential for a lot of articles. Last time I checked, images were excluded from dumps because of license issues. "Fair use" in particular. How about a dump of just the images with fitting licenses? Does anyone here know why this is not available?
People don't understand licences. There are many images with incorrect licences. (There are bots that trawl the images to ask people to correct the licences; there have been megabyte-long flamewars about the operators of those bots and how unpopular image tagging is.)
There are just too many infringing images, even those supposedly with the correct licence, for Wiki* to distribute and stay safe.
Random article results in a 404 maybe one time in four. Here's a suggestion for an improvement: a link on 404 pages to make that article available offline. Then if I go looking for a specific page that isn't offline, I can fetch it and read it later.
I use the Wikireader (from OpenMoko) when traveling:
http://en.wikipedia.org/wiki/Wikireader
I find it very useful, especially on the longer wall-socketless cycling trips.
You can stick both wikipedia and wiktionary on it. Quite possibly also Wikitravel, if they provide dumps.
It says it was tested in Firefox 10, which is a little surprising since it doesn't work at all in Firefox 10. The IndexedDB spec changed and Firefox changed to align with the spec between 9 and 10, but the page uses the old API.
I tested it on an infrequently updated installation of Firefox Nightly, and the about page said Firefox 10. I didn't know the API had changed, though; I'll look into it. How did it change?
The old API looked like this:

    var request = mozIndexedDB.open("databasename");
    request.onsuccess = function (event) {
      // on the old Gecko API you called setVersion() to get a version-change transaction
      var versionRequest = event.target.result.setVersion(N);
      versionRequest.onsuccess = function (event) {
        // set up your database
      };
    };

With the spec-aligned API in Firefox 10 it looks like this:

    // the version is passed to open(), and setup happens in onupgradeneeded
    var request = mozIndexedDB.open("databasename", N);
    request.onupgradeneeded = function (event) {
      // set up your database
    };
    request.onsuccess = function (event) {
      // do stuff with your database
    };
Feel free to email me at <my hacker news username>@mozilla.com if you need a more detailed explanation.
Does this app grab the files from Wikipedia directly? It doesn't seem very nice to create an app that pulls down gigabytes of data from a web service you neither own nor have permission to use.
WebRTC means peer-to-peer is probably coming to Chrome and Firefox soon, which will allow an app like this to transfer Wikipedia in all its 7.3GB (compressed) glory without harm to anyone's servers.
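To make that concrete, here's a rough sketch of the sending side using the RTCPeerConnection/RTCDataChannel API the spec describes (the API is still settling and is prefixed in current browsers, so take the names as assumptions). Signaling, i.e. exchanging the offer/answer and ICE candidates through some server, is omitted, and `dump` stands in for an ArrayBuffer holding the compressed data.

    var pc = new RTCPeerConnection();
    var channel = pc.createDataChannel('wikipedia-dump');
    channel.binaryType = 'arraybuffer';

    channel.onopen = function () {
      // send in fixed-size chunks so no single message is too large
      // (16KB is an arbitrary choice for this sketch)
      var CHUNK = 16 * 1024;
      for (var offset = 0; offset < dump.byteLength; offset += CHUNK) {
        channel.send(dump.slice(offset, offset + CHUNK));
      }
    };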
Thanks so much for this. This will be incredibly useful for me (behind the GFW, which gets moody about Wikipedia pretty often). Could this easily periodically update itself to grab fresh versions of articles? I think that would be a great feature, especially if you could do it without having to pull down the whole database each time you wanted to update, instead just updating on an article-by-article basis.
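I imagine it could look something like this (just a sketch of the idea, not how the app actually works): keep a fetch timestamp per article in the local store and only re-download stale ones. Here `db` is assumed to be the already-open IndexedDB database with an "articles" store keyed by title, and `fetchArticle` is a hypothetical helper that pulls a single article from whatever endpoint serves the dump.

    var WEEK = 7 * 24 * 60 * 60 * 1000;

    function refreshIfStale(db, title, fetchArticle) {
      var store = db.transaction(['articles'], 'readonly').objectStore('articles');
      store.get(title).onsuccess = function (event) {
        var record = event.target.result;
        if (record && Date.now() - record.fetchedAt < WEEK) return; // still fresh
        fetchArticle(title, function (text) {
          // open a fresh transaction: the read-only one has auto-committed by now
          db.transaction(['articles'], 'readwrite')
            .objectStore('articles')
            .put({ title: title, text: text, fetchedAt: Date.now() });
        });
      };
    }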
Absolutely amazing. This technology can be used for many other offline databases. He provides the tools for indexing, compressing and everything needed for the reader. Make sure to read his corresponding blog post: http://antimatter15.com/wp/2011/12/offline-wiki-redux/
It only sort of works on iOS 5: the downloads stop whenever an "Increase Storage" prompt pops up, and you have to reload whenever that happens. But it does work with the small dump, albeit slowly.
Cool, I initially didn't think this much storage was possible on mobile yet. Are you saying you can get the whole thing down if you keep agreeing to the prompts?
It's a pity mobile browsers haven't got better support for this kind of thing yet.
No, I think it stops issuing prompts after 50GB. Also, on iOS 5 it only supports WebSQL, which (AFAIK) doesn't store objects like typed arrays, so I have to convert them to base64-encoded strings and back, which makes it use even more space.
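For anyone curious, the conversion is roughly this (a sketch, assuming the data is a Uint8Array and that btoa/atob are available, which they are in Mobile Safari); the extra space comes from base64's ~33% overhead:

    function toBase64(bytes) {
      var binary = '';
      for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]); // one char per byte
      }
      return btoa(binary); // safe to store in a WebSQL TEXT column
    }

    function fromBase64(str) {
      var binary = atob(str);
      var bytes = new Uint8Array(binary.length);
      for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
      }
      return bytes;
    }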
I find that hard to believe. Other wiki readers' dumps are a multiple of that. E.g. aarddict for en is ~8GB.