What is the point of using socket.io today? Three years ago WebSocket support wasn't universally available, so you needed XHR etc. as a fallback. But now it's 2015 and everything supports WebSocket. Why not just use it directly? It has a super simple API. (see: http://ajf.me/websocket - a page I made about it)
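To illustrate, a complete client in the raw API really is just a handful of lines (the endpoint URL here is a placeholder):

// connect, send, receive - that's essentially the whole client API
var ws = new WebSocket('wss://echo.example.com');
ws.onopen = function () { ws.send('hello'); };
ws.onmessage = function (event) { console.log('received:', event.data); };
ws.onclose = function () { console.log('closed'); };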
If you're worried about firewall/proxy traversal, use wss:// (WebSocket over TLS), which not only traverses, but doesn't cause a speed and latency downgrade like falling back to XHR does.
I realise some sites might need to use socket.io if they're targeting legacy platforms, but that's an ever-shrinking slice of the market. For most sites it is probably overkill.
On top of its transport layer abstraction, socket.io adds various useful features including events, rooms, namespacing, efficient/convenient (de)serialization of both JSON and binary data, and over-the-network callbacks. I've been using it in websockets-only mode for various projects (you can limit the transports used), because I don't care for slower / less reliable transports but still want the extra features.
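For instance, the 1.x client takes a transports option that skips the fallbacks entirely (server URL is a placeholder):

// force WebSocket only - no XHR-polling fallback
var socket = io('https://example.com', { transports: ['websocket'] });
socket.on('connect', function () { socket.emit('hello', { ok: true }); });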
EDIT: If you'll allow the plug, an MMO trivia game show I made using socket.io (and three.js): http://masterofthegrid.net/
Socket.io is wayyyyy more noob-friendly. If you send JSON over the wire, socket.io automatically converts it to a JavaScript object or array for you, ready to be used. The best feature, however, is that you can emit any string as an event:
socket.emit("hello"); and the server can listen on this event with socket.on("hello"). If you want to do the same thing with the reference WebSocket you have to re-implement the same thing again and you will probably end up recoding the wrapper that socket.io already gives you.
Sure, I'm glossing over a few things, but an EventEmitter is 10-20 lines depending on how crazy you want to get, and wrapping both in objects is probably another 10-20 lines.
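Something like this sketch, say (names are invented, and it ignores readyState checks and send-queueing for brevity):

// tiny event wrapper over a raw WebSocket
function EventSocket(url) {
  var handlers = {};
  var ws = new WebSocket(url);
  ws.onmessage = function (e) {
    var msg = JSON.parse(e.data); // expects { event: ..., data: ... }
    (handlers[msg.event] || []).forEach(function (fn) { fn(msg.data); });
  };
  this.on = function (event, fn) {
    (handlers[event] = handlers[event] || []).push(fn);
  };
  this.emit = function (event, data) {
    ws.send(JSON.stringify({ event: event, data: data }));
  };
}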
socket.io, on the other hand, is a C node.js plugin plus 61,000 lines of JS code :(
Socket.IO also gives you things like cross-everything callbacks:
// server: passing a function as the last argument makes it an acknowledgement
socket.emit('event', function (msg) { console.log('Client responded!', msg) })
// client: the handler receives the ack callback and invokes it
socket.on('event', function (cb) { cb('This is my response') })
And it also provides socket namespacing, and "rooms" (a bit like chat rooms), and broadcasts, and a bunch of things that I don't use.
Sure, you can bolt all of that on in probably a few hundred lines, but socket.io also doesn't just do sockets and events :)
(And it also does it in less than 61000 lines. You still need the "ws" module to do WebSocket stuff in Node, even if you don't use socket.io)
Socket.io also transparently handles sessions and session resumption (automatic reconnect), namespaces, filters, and a ton of other useful stuff for a moderately complex application. If I'm just sending events between client and server (say, live-updating a common twitstream), regular ws is fine. But if I'm selecting and reflecting certain events to other connected clients by room, and don't want to worry about stuff like "oh hey, that guy needs to re-handshake because he lost network", socket.io provides a lot of useful sugar.
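For a taste of the room sugar (room and event names invented for the example):

// server-side: join a room, then reflect events to everyone else in it
io.on('connection', function (socket) {
  socket.join('room 42');
  socket.on('chat', function (msg) {
    socket.broadcast.to('room 42').emit('chat', msg);
  });
});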
I don't normally like bashing open-source projects, but socket.io should not be used. It may be noob-friendly, but that's just because it does things so automatically that you can't really use it correctly. When I was a websockets noob, I used socket.io briefly, and it was a complete waste of time.
Agreed - I've been using SockJS for a while and haven't had any issues. It's got a super-simple API. (intended to be as close to the WebSocket API as possible)
If your language of choice has a SockJS implementation, I'd recommend it as a first option.
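For the unfamiliar, the client side looks almost exactly like a plain WebSocket (the URL is a placeholder):

// SockJS mirrors the WebSocket API surface
var sock = new SockJS('https://example.com/echo');
sock.onopen = function () { sock.send('test'); };
sock.onmessage = function (e) { console.log('message', e.data); };
sock.onclose = function () { console.log('close'); };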
This is trivial to do yourself. JSON.stringify(), JSON.parse(), switch() {}. It's not really some massive benefit of using the library. And this way there's no magic.
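i.e. something like this sketch (the message shape is just an example):

// DIY typed messages over a plain WebSocket - no magic
var ws = new WebSocket('wss://example.com/socket'); // placeholder URL
ws.onmessage = function (e) {
  var msg = JSON.parse(e.data);
  switch (msg.type) {
    case 'chat': console.log('chat:', msg.payload); break;
    case 'bye':  ws.close(); break;
    default:     console.warn('unknown message type:', msg.type);
  }
};
ws.onopen = function () {
  ws.send(JSON.stringify({ type: 'chat', payload: 'hello' }));
};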
Also, magically trying to decode strings if they look like JSON sounds rather scary.
> Also, magically trying to decode strings if they look like JSON sounds rather scary.
That's not what's going on. Each message is tagged with a "type", and one of the tags means "JSON.parse this into an object" while another means "this is just a string".
The rest of the types are for control messages (like ping/pong), I believe.
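Conceptually it's something like this sketch - note the tag values here are invented for illustration, not socket.io's actual wire format:

// sketch of the idea only: a one-character type tag in front of each frame
function decode(frame) {
  var tag = frame[0], body = frame.slice(1);
  if (tag === '0') return { kind: 'string', data: body };
  if (tag === '1') return { kind: 'json', data: JSON.parse(body) };
  return { kind: 'control', data: body }; // e.g. ping/pong frames
}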
- Plenty of consumer grade firewalls and routers will mess up socket connections
- E-commerce sites can't afford to drop customers with old browsers (and what counts as "old" anyway? A browser that is only a few years old should be supported regardless)
- Some corporate networks won't support anything other than plain HTTP over port 80
And so on. 'Legacy' for you is everyday business for someone else.
I remember when Microsoft tried to pass IE9 off as a "modern browser" - with less than half the HTML5 features of Chrome and Firefox at the time (which is still true of the latest IE versions, btw).
IE11 is fairly modern. You can't berate Microsoft for trying to catch up.
We don't want to be in a position where there can't be new browser engines because they're not "modern" if they don't implement literally every feature their competitors support.
IE11 was fairly modern when it came out. It will likely be legacy long before IE12, or whatever they decide to call it, comes out. They're not really going to be able to keep up until they divorce the browser from the OS. Given that this is Microsoft, it's kind of funny that this is their problem now.
> IE11 is an evergreen browser with constant improvement.
Until Spartan comes out?
> There won't be another IE. There will be Spartan.
So we're going to be stuck supporting IE11 long after "Spartan" has taken hold just like every previous release? Doesn't that go against the whole purpose of the "Evergreen" browser?
> They basically have, long ago. IE10 and IE11 aren't Windows 8-exclusive.
I don't mean exclusive to the operating system. I mean that there are parts of the OS that rely on the browser libraries themselves. And there are weird situations where OS things rely on IE libraries, or (worse) Office libraries. That, at least for a while, was one of the big reasons the problems would never get fixed.
Those companies are part of the problem, not the solution.
FWIW, we're driving our users to deploy at least IE10 before talking to us, preferably IE11. Supporting old browsers is just not worth it--for devs in the short-term, or clients in the long-term.
> Supporting old browsers is just not worth it--for devs in the short-term, or clients in the long-term.
The part you missed is that it is worth it for clients in the short term. If you have an eCommerce site and a portion of your potential customers use IE8, then you cater to IE8. It's as simple as that. When your business depends on making money every month, "short term" or "long term" doesn't matter as much.
Countries still block websockets from other countries. We don't use pure websockets because AJAX polling still works even when firewalls and other hardware stop websockets.
You shouldn't have any problems with firewalls so long as you use TLS. And if you don't use TLS and rely on XHR fallback, then you're suffering a downgrade in speed and latency.
We tried switching from SockJS to pure websockets and had to revert because it broke too many users, even though our website only supports WebGL-capable browsers (IE11, etc.) and all communication is done over https/wss. One of the biggest problems was plugins like the zenmate privacy plugin https://chrome.google.com/webstore/detail/zenmate-security-p...
> Even over TLS? Are your connections being MITMed?
Some corporate networks do MITM you and mandate HTTP only (we've seen this surprisingly often in the wild).
Now, for most apps this isn't a concern since those employees probably wouldn't be allowed to use the app, but it can be quite important for apps that specifically target large businesses.
We implemented our own client using libwebsockets (https://libwebsockets.org/trac/libwebsockets). Our main rationale was that we share 95% of the same codebase across iOS/Android/Win/OSX/Linux.
I got curious about this library, but the boost dependency is an immediate "oh hell no". Even more so now that we have good C++11 features across the board, the boost dependency is something we avoid like the plague (we used to have it, and C++11 support was a godsend that enabled us to get rid of boost).
> but the boost dependency is an immediate "oh hell no".
So forgive me, because I'm pretty new to C++ (well, I knew it semi-decently 12 years ago; trying to relearn now), but most of the C++ developers I know love boost, especially for asio, since most TCP libs seem to be very old. What is the reason it makes you go "oh hell no"?
If you don't care about speed, Berkeley sockets (bind, listen, accept, etc.) are just fine and very easy to deal with.
If you do care about speed, you should be banging against the operating system's provided tools anyways (IOCP on Windows, kqueue on BSD, epoll on Linux, etc.).
If it's abstraction you care about, you shouldn't be doing networking with raw TCP anyways, and should just use zmq, nanomsg, or whatever, and not drag in the entire clown car of boost.
This weekend I tried to bring in boost format. I tried to bring in JUST format, but nope... exceptions, and config, and a bunch of other stuff... Fifteen libraries, maybe?
10 years ago, for me, it was because it was really hard to get it to compile. I might get it set up on my cygwin install or my Linux box but have trouble compiling it on both. I was able to get Mozilla compiling faster/easier than boost... It's probably improved a lot since then, but back then, if you had asked me to use boost, I'd have been rather upset as well. I think nowadays, at least on Linux, there are packages for boost, so it's not bad at all, and also homebrew for Mac, so that's exciting.
> but the boost dependency is an immediate "oh hell no"
If this were a C++11 and standalone ASIO dependency would that change your mind? If it is a specific issue with ASIO, what sort of network transport would you want to see used for something like this (there is no network library in C++11)?
Boost makes you enter header and linking hell. I'm talking "an entire day to figure out how to include that in your makefile and why the fuck doesn't it link correctly" level hell.
You can also count on having 10MB+ of libraries to distribute if you use the entire boost lib (which, let's be honest, will be the case 90% of the time because people can't be bothered with cherry-picking features).
And just pray that you don't need to recompile boost because you're in for a fun few hours of wasted compilation because it fails at 80% because of <cryptic boost message>
This is what I find most surprising about C++. Apparently including a single library as a dependency takes large amounts of effort. The end result seems to be that developers avoid using dependencies and resort to rewriting large amounts of code.
If you contrast this with, say, Node.js, you could simply do "npm install boost --save-dev" and "var boost = require('boost');".
Nobody seems to use C++ package managers either, and the typical reply is to use the package manager of the operating system. But since that's not platform neutral there's a lot of wasted effort of maintaining multiple instructions to install the library. And Windows instructions typically require non-trivial amount of manual work, often even setting up weird environment variables pointing to various locations.
And then you want to build for 32-bit instead of 64-bit and there's more manual work.
I would have imagined that this would be a solved problem by now.
Hmm, granted I'm very new to boost and to C++ in general because I'm relearning it, but a couple of nights ago I grabbed boost, compiled it from source, and tried out an asio example, and it all seemed to work pretty easily. Maybe I need to get into more complex projects to see how bad it gets (which is worrisome, since I'm essentially a newbie and don't want to waste my time learning something that won't be useful later, or will just be painful).
> And just pray that you don't need to recompile boost because you're in for a fun few hours of wasted compilation because it fails at 80% because of <cryptic boost message>
It took my lowest spec MacBook Air about 15-20 minutes to compile from source. Does it normally take hours for you?
Of course, boost is not as hard as some grumpy oldtimers make you believe. Anyone who refuses to use the massive amounts of tested, reliable code that is boost because 'it's too hard to install', should be banned from any real-world project for severe NIH syndrome.
I wish I was a grumpy oldtimer, but I'm more of a dirty youngun'.
Note that I didn't say "don't use boost". Boost is amazing and you should use it if it saves you time. It's just that initial setup may be... let's say interesting.
They were referring to the time it takes to build boost itself, not their own projects' compile times. On most Linux distros there are binaries already available, so that's not a problem; yes, with brew on OSX it takes a long time, but you don't do it that often.
Of course template-heavy, header-only libraries will increase compile time, but not insanely so. I try to be careful about including boost's ease-of-use headers which pull in everything and keep it limited to what I really need from a given sub-project.
Most of what I use from boost is in C++11, except asio, but the same principles apply.
One thing to keep in mind is that there are C++ devs who like Boost and there are C++ devs who hate Boost with passion. There are also said to be devs who are merely OK with this abomination but they are more of an urban legend really.
In other words, consider the fact that by throwing Boost into the mix, you rob yourself of a good half of the intended audience, if not more.
I guess I belong to the "urban legend" category then.
I never introduced Boost into my projects, but when one of the co-developers on an open source project where I am the lead (FlameRobin.org) said it would be useful, and explained what problems it would solve, I just said "ok". IIRC, this was circa 2007.
All those complaining about compile times should learn about PCH (pre-compiled headers). It's fairly simple and makes sure headers are compiled only once (unless they change). AFAIK, all modern compilers support PCH (GCC, MSVC, even Borland's).
Fixing a build can be pretty torturous though. My main thought when I'm looking at using a library in C++ is "oh god I hope this isn't a pain to build"
It has nothing to do with the integration difficulties.
Many C++ devs, me included, come from a C background, and the amount of obfuscation Boost adds to the code is simply not worth any benefits it brings along. Yes, you can do a lot with a single line of Boost'ed code, but when that line maps onto pages of simpler code, you basically lose a lot of control over the code. It also bloats the binary, which may not even register as a viable concern with some, but which does have several long-reaching complications.
That's fair. C devs are also a little more used to being tied to specific OS facilities such as kqueue/select. For me, boost shines as an ASIO implementation. When that becomes stdlib, I'm not sure I would be compelled to use it.
This is along the lines of the traditional abstraction level debates, rather than anything particularly wrong with boost. I think it gets singled out unfairly as being a particularly difficult 3rd party library.
Why do you think it's only "a day"? Also, you are forever introducing a dependency into your build chain, codebase, and executable. EVERY 3rd-party dependency should be audited as such (performance, security, usability, etc.). Boost is no exception, but for the amount of things it comes with (often), and the strain it puts on compiler requirements, it is typically included as a last resort.
I should mention, people who talk without much context are a curious bunch too.
Yes, there is always a balance of concerns. Build management with 3rd-party deps is not trivial, but it's part of the lifecycle. In my experience, boost integration really is "a day" on Windows (including stashing the .libs in their own repo and reproducing a fresh 'git clone' build), and 10 minutes on a platform where you can use the package manager to obtain it. You can always excise it from a project later if you don't abuse 'using' statements. FWIW, I only really use variant and ASIO.
If you were adding boost to a project just to do some weird spirit/preprocessor hacks, I'd be wary. But if you told me you were going to write yet another cross platform ASIO wrapper because you couldn't figure out how to compile boost I'd be even more concerned.
If you're not bound to a browser, what's the benefit of using WebSockets vs. something on top of normal sockets? Personally, I thought this was going to be a Socket.IO server in C++.
The benefit is you don't need a separate service for non-web clients.
If you have something TCP-based, it's usable only by non-web clients. If you have something WebSocket-based, it's usable everywhere.
Edit: An additional consideration: WebSocket is a nice, message-based protocol with easy-to-use APIs everywhere, unlike the stream-based TCP whose usual API (Berkeley sockets) is rather more difficult to use.
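For example, with the "ws" module mentioned elsewhere in the thread, an echo server that any WebSocket client (browser or native) can talk to is a handful of lines (port is arbitrary):

// message-based: each 'message' event is one complete frame
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });
wss.on('connection', function (ws) {
  ws.on('message', function (data) {
    ws.send('echo: ' + data); // no manual stream buffering or delimiting
  });
});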
I agree that socket.io offers little over WebSocket except for backwards-compatibility, but the OP's question was more about why you'd want to use WebSocket.
I had the same gut feeling. TCP already seems like a fine enough abstraction (especially in comparison to UDP), and it is actually literally _everywhere_.
Maybe someone should write a TCP socket implementation for the browser instead of trying to re-implement the transport layer on top of the application layer.
Ah, that makes sense. And I think I might have found an answer to my own question:
"By virtue of being written in C++, this client works in several different platforms. The examples folder contains an iPhone, QT and Console example chat client!" - From the github repo
Right now I am using the IPWorks WS client for C# and the simple 'ws' node.js module; they are talking to each other just fine. They don't have the same capabilities as socket.io, though (we have to handle reconnections a bit differently, there's no concept of channels, etc.). Would SignalR talk to a node.js socket.io server and provide these features?
GRPC.io - that's where the next generation of efficient multiplexed streams will come from. Lack of a client-side lib and http/2 adoption means it's still a ways out.
That's a bold claim. RPC mechanisms are one of those spaces - like IRC clients, text editors, and parser generators - where there seem to be more implementations than there are extant users.
Anybody else notice that Automattic (the wordpress company) now owns the github account for Socket.io? (https://github.com/Automattic/socket.io/) As well as mongoose, kue, expect.js and others... when and why did this happen?
LearnBoost/Guillermo made Socket.io; they had a startup under the same umbrella called CloudUp; Automattic acquired CloudUp/LearnBoost/Socket.io (2 years ago I think).