
For long books, or books containing large images or videos, that giant HTML file would be so gigantic that many e-readers would be slow to load it or would even fall over (the average e-reader doesn’t have a fast CPU or loads of memory).

I also expect data URIs for images would make the HTML file, even if gzipped, larger than the equivalent ePub.

HTML also isn’t good at the book-like things such as pagination, tables of contents and on-page footnotes.



Videos? I guess for some applications. Either way, the giant HTML file is nasty, yes, but the epub would be the same size [1]. The single file would be harder to process, though, because even XHTML has a nesting structure that doesn't lend itself easily to decomposition.

For the most part ebooks are reflowable, so pagination only matters in the sense of manual page breaks, and HTML has to be extended to handle that even for epub. Similarly, on-page footnotes (<aside> in iBooks epub and <a href><sup> for Kindle) already have to be handled through HTML extensions today.
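
Roughly what that looks like today, as a sketch: the epub:type attributes come from the EPUB 3 structural-semantics vocabulary (namespace declaration omitted), and the Kindle flavor is just the plain <a href><sup> link without them.

  <!-- illustrative markup, not taken from any particular book -->
  <p>Some sentence that needs a note.<a epub:type="noteref" href="#fn1"><sup>1</sup></a></p>

  <aside epub:type="footnote" id="fn1">
    <p>1. The note text, which supporting readers show as a pop-up on the same page.</p>
  </aside>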

Tables of contents and indexing are a little harder, but if the format were well defined (something like <chapter> tags, or just treating <h1> as a semantic chapter marker) they would be easy to generate as well. Or it could just be done using a bunch of <a href> links in an explicit TOC -- easy enough for a compiler to handle.
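
The explicit-TOC variant could be as simple as this (chapter titles and ids made up for illustration):

  <nav>
    <ol>
      <li><a href="#ch1">Chapter 1</a></li>
      <li><a href="#ch2">Chapter 2</a></li>
    </ol>
  </nav>

  <h1 id="ch1">Chapter 1</h1>
  <!-- chapter body -->
  <h1 id="ch2">Chapter 2</h1>
  <!-- chapter body -->

and a TOC generator only has to walk the <h1> elements to build the same thing.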

Metadata (things like the author) is in theory more difficult, but defining tags in the <head> of the document would be just as easy as the manifest definitions that readers have to deal with now.
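
Something along these lines, say -- the meta names here are illustrative and just mirror the Dublin Core fields the OPF manifest carries, not an existing standard mapping:

  <head>
    <title>Example Book Title</title>
    <!-- hypothetical names, stand-ins for the OPF dc:creator / dc:language entries -->
    <meta name="dc.creator" content="Example Author">
    <meta name="dc.language" content="en">
  </head>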

[1] gzipping base64-encoded random data only costs ~3% in file size over the raw data: base64 text carries 6 bits of entropy per 8-bit byte, so gzip claws back most of the 33% encoding overhead.



