Why we still use HTTP is beyond me. And I don't mean the speed issues. Why have a protocol that's so complicated, when most of the things we need to build with it are either simpler or end up reimplementing parts of the protocol?
Could you elaborate on your issues with HTTP a bit? What kind of protocol would do a better job?
A minimal implementation of HTTP (and I'm strictly talking about the transport protocol, not about HTML, JS, ...) is dead simple and relatively quick to write.
Of course there's a ton of extensions (gzip compression, keepalive, chunks, websockets, ...), but if you simply need to 'add HTTP' to one of your projects (and for some reason none of the existing libraries can be used) it shouldn't take too many lines of code until you can serve a simple 'hello world' site.
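To illustrate (a hypothetical sketch, not production code): a 'hello world' HTTP/1.0 server over a raw socket, ignoring keep-alive, chunking and all the other extensions, fits comfortably in a couple dozen lines of Python. The host and port here are just placeholders.

    import socket

    HOST, PORT = "127.0.0.1", 8080   # placeholder address for illustration

    def serve():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                with conn:
                    conn.recv(4096)   # read (and ignore) the request
                    body = b"hello world\n"
                    conn.sendall(
                        b"HTTP/1.0 200 OK\r\n"
                        b"Content-Type: text/plain\r\n"
                        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                        b"\r\n" + body
                    )

    if __name__ == "__main__":
        serve()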
On top of all that, it's dead simple to put any one of the many existing reverse proxies/load balancers in front of your custom HTTP server to add load balancing, authentication, or rate limiting (and all of those can be done in a standard way).
Furthermore, HTTP has the huge advantage of being readily available on pretty much every piece of hardware that has even the slightest idea of networking.
Any new technology would have to fight a steep uphill battle to convince existing users to switch.
I do see a lot of application protocols tunnelled over HTTP that have no sane reason to be. Partly to work around terrible firewalls/routers/etc. - but of course the willingness to work around those perpetuates their existence. E.g. one reason for the rise of Skype was so many crappy routers that couldn't handle SIP.
My friend once mentioned that FTP would be a good option, though I'm not sure why. I think they regarded HTTP as superfluous for what we use the web for.
It's not just firewalls. The fact that (unencrypted) FTP is still widely used today when better alternatives like SFTP (via SSH) have existed for years strikes me as odd.
(I'm speaking about authenticated connections. For anonymous access - which should be read-only anyway - you're usually better off using HTTP.)
I once had to provide an FTP-like interface to user directories for the website's users. Couldn't find an easy way to do it with SFTP without creating Linux users. Found an FTPS daemon that would let me call an external script for auth and set a root directory, which made it trivial (once I deciphered its slightly-cryptic docs).
So in that case, at least, I was very glad FTP(S) was still around.
HTTP tends to be faster for what we use the web for: [0]
FTP does have some advantages, but HTTP has more advanced support for resuming connections, virtual hosting, better compression, and persistent connections, to name a few.
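For what it's worth, "resuming" in HTTP is just a Range header away. A rough sketch with Python's http.client (the host and path are placeholders):

    import http.client

    # Ask the server for everything from byte 1048576 onwards, i.e. resume a download.
    conn = http.client.HTTPSConnection("example.com")       # placeholder host
    conn.request("GET", "/big-file.iso",                    # placeholder path
                 headers={"Range": "bytes=1048576-"})
    resp = conn.getresponse()
    print(resp.status)                       # 206 Partial Content if ranges are supported
    print(resp.getheader("Content-Range"))   # which bytes the server actually sent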
I bet if we had used FTP instead of HTTP for serving HTML right from the start, FTP would today have all of the same extensions and the same people would argue that it's too bloated :) (HTTP started as a pretty minimalistic protocol back in the day.)
I often find the discrepancy between what HTTP was originally designed for (serving static HTML pages) and all the different things it's being used for today highly amusing. Yes, some of today's applications of HTTP border on abuse, but its versatility (combined with its simplicity) fascinates me.
No, because FTP is stateful, so it would not have scaled well to many of today's HTTP use cases, and something else would probably have been born to solve the problems that statelessness solves.
The two success factors of HTTP are statelessness and fixed verbs.
HTTP is quite a good protocol. Simple, extensible to a sane extent, but not overly extensible (XMPP, I'm thinking about you).
HTTP is not accidentally successful. FTP is a bad joke: stateful, defaults to 7-bit ASCII mode unless you ask for binary, and uses a separate data connection (passive mode only changes who opens it).
Basic HTTP is dead simple, it works, and it has many add-ons with backward compatibility (one can still use a basic HTTP client or server in most cases), and there's even a new version fully optimized for today's needs (and in binary form, no less).
A bunch of newline- (CRLF-) separated key-value mappings, some with a little DSL of their own (such as Set-Cookie).
It gives you a status message instantly, a date to check against your cache, a Content-Type and acceptable encodings for your parser, and a bunch of other values for your cache. All for free.
As for the body of the content? For a gzipped response like this, it's everything after the headers, until EOF. That's not quite as easy as when a Content-Length header is given, but hardly difficult to parse.
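Roughly what such a parser looks like (a sketch only, assuming no chunking, no header folding, and a body that simply runs until EOF):

    import gzip

    def parse_response(raw: bytes):
        # Split headers from body at the blank line, parse the status line,
        # then collect the CRLF-separated key-value headers.
        head, _, body = raw.partition(b"\r\n\r\n")
        lines = head.decode("iso-8859-1").split("\r\n")
        version, status, reason = lines[0].split(" ", 2)
        headers = {}
        for line in lines[1:]:
            key, _, value = line.partition(":")
            headers[key.strip().lower()] = value.strip()
        return int(status), headers, body   # body: everything after the headers, until EOF

    # status, headers, body = parse_response(data_read_until_eof)
    # if headers.get("content-encoding") == "gzip":
    #     body = gzip.decompress(body)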
HTTP is easy.
In fact, HTTP is so easy that incomplete HTTP servers can still serve up real content, and browsers can still read it.
HTTPS is more complicated, but if you simply rely on certificate stores and CAs it becomes much easier. Then again, HTTPS is arguably a different protocol.
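Concretely, "rely on the certificate store and CAs" can be as small as this (a sketch using Python's ssl module; example.com is a placeholder host):

    import socket, ssl

    ctx = ssl.create_default_context()   # uses the platform CA store, verifies cert and hostname
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(tls.recv(4096).decode("iso-8859-1", "replace"))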
> As for the body of the content? For a gzipped response like this, it's everything after the headers, until EOF. That's not quite as easy as when a Content-Length header is given, but hardly difficult to parse.
This is chunked and keep-alive. Things get a little trickier.
True, you keep the connection open, receive a (hex) length of expected bytes, then said bytes, and repeat until a chunk of length 0 is sent. Still simple enough that there are a dozen implementations of less than a page, only a search away.
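Something like this, give or take (a sketch that ignores chunk extensions and trailers; rfile is assumed to be a buffered file object over the socket, e.g. sock.makefile("rb")):

    def read_chunked(rfile) -> bytes:
        body = b""
        while True:
            # Each chunk: a hex length line, CRLF, <length> bytes, CRLF.
            size = int(rfile.readline().split(b";")[0].strip(), 16)
            if size == 0:
                rfile.readline()     # final CRLF (trailers ignored in this sketch)
                return body
            body += rfile.read(size)
            rfile.readline()         # CRLF terminating the chunk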