Socket.IO C++ (socket.io)
253 points by Rauchg on April 13, 2015 | 78 comments



What is the point of using socket.io today? Three years ago WebSocket support wasn't universally available, so you needed XHR etc. as a fallback. But now it's 2015 and everything supports WebSocket. Why not just use them directly? They have a super simple API. (see: http://ajf.me/websocket - a page I made about them)

If you're worried about firewall/proxy traversal, use wss:// (WebSocket over TLS), which not only traverses, but doesn't cause a speed and latency downgrade like falling back to XHR does.

I realise some sites might need to use socket.io if they're targeting legacy platforms, but that's an ever-shrinking slice of the market. For most sites it is probably overkill.


Quoting myself the last time this question was asked (https://news.ycombinator.com/item?id=8917505):

On top of its transport layer abstraction, socket.io adds various useful features including events, rooms, namespacing, efficient/convenient (de)serialization of both JSON and binary data, and over-the-network callbacks. I've been using it in websockets-only mode for various projects (you can limit the transports used), because I don't care for slower / less reliable transports but still want the extra features.
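For reference, restricting the client to the WebSocket transport is a one-line option (sketch for socket.io 1.x; the option has moved around between versions, and the URL is a placeholder):

```javascript
// Client side: skip the polling fallback and use only WebSocket.
// In socket.io 0.9 the equivalent was io.connect(url, { transports: ['websocket'] }).
var socket = io('https://example.com', { transports: ['websocket'] });
```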

EDIT: If you'll allow the plug, an MMO trivia game show I made using socket.io (and three.js): http://masterofthegrid.net/


Socket.io is wayyyyy more noob-friendly. If you send JSON over the wire, socket.io automatically converts it to a JavaScript object or array for you, ready to be used. The best feature, however, is that you can emit any string as an event:

socket.emit("hello"); and the server can listen on this event with socket.on("hello"). If you want to do the same thing with the plain WebSocket API you have to implement that layer yourself, and you will probably end up recoding the wrapper that socket.io already gives you.


> If you send json over the wire, socket.io automatically converts it to a javascript object or array for you, ready to be used.

That's literally 4 lines:

    // sender
    ws.send(JSON.stringify(someObject));

    // receiver
    ws.on('message', function (str) {
      var obj = JSON.parse(str);
    });

> socket.emit("hello"); and the server can listen on this event with socket.on("hello").

This is hardly any more lines of code:

    // sender
    function sendEvent(eventName, data) {
      ws.send(JSON.stringify({event: eventName, data: data}));
    }

    // receiver
    ws.on('message', function(str) {
      var eventInfo = JSON.parse(str);
      eventEmitter.emit(eventInfo.event, eventInfo.data);
    });

Sure, I'm glossing over a few things, but an EventEmitter is 10-20 lines depending on how crazy you want to get, and wrapping both in objects is probably another 10-20 lines.
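For what it's worth, a bare-bones emitter really is in that range; something like this sketch (illustrative, not the Node.js EventEmitter API):

```javascript
// Minimal event emitter: just enough to dispatch incoming WebSocket
// messages by event name, as in the receiver snippet above.
function MiniEmitter() {
  this.handlers = {};
}

MiniEmitter.prototype.on = function (event, fn) {
  (this.handlers[event] = this.handlers[event] || []).push(fn);
  return this;
};

MiniEmitter.prototype.emit = function (event) {
  var args = Array.prototype.slice.call(arguments, 1);
  (this.handlers[event] || []).forEach(function (fn) {
    fn.apply(null, args);
  });
  return this;
};
```

Dispatching a parsed message is then just `emitter.emit(msg.event, msg.data)`.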

socket.io on the other hand is a C node.js plugin and 61000 lines of JS code :(


Socket.IO also gives you things like cross-everything callbacks:

     // server
     socket.emit('event', function (msg) { console.log('Client responded!', msg) })

     // client
     socket.on('event', function (cb) { cb('This is my response') })

And it also provides socket namespacing, and "rooms" (a bit like chat rooms), and broadcasts, and a bunch of things that I don't use.
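For reference, the rooms API on the server side looks roughly like this in socket.io 1.x (sketch only; 'room42' is a made-up room name):

```javascript
// Server side (socket.io 1.x): put a socket in a room, then target the room.
io.on('connection', function (socket) {
  socket.join('room42');

  // Everyone in room42 except the sender:
  socket.broadcast.to('room42').emit('event', 'hello room');

  // Everyone in room42, sender included:
  io.to('room42').emit('event', 'hello room');
});
```

Namespaces work similarly via `io.of('/chat')`.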

Sure, you can bolt all of that on in probably a few hundred lines, but socket.io also doesn't just do sockets and events :)

(And it also does it in less than 61000 lines. You still need the "ws" module to do WebSocket stuff in Node, even if you don't use socket.io)


Socket.io also transparently handles sessioning and resuming of sessions (automatic reconnect), namespaces, filters, and a ton of other useful stuff for a moderately-complex application. If I'm just sending events between client and server (say, live-updating a common twitstream), regular ws is fine. But if I'm selecting and reflecting certain events to other connected clients by room, and don't want to worry about stuff like "oh hey, that guy needs to re-handshake because he lost network", socket.io provides a lot of useful sugar.


You don't even need an event emitter. In many cases you might just use switch(){}.
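That could look something like this (a hedged sketch; the event names and the `dispatch` helper are made up — you'd wire it to `ws.on('message', dispatch)`):

```javascript
// Dispatching without an emitter: one switch over the event name.
function dispatch(str) {
  var msg = JSON.parse(str);
  switch (msg.event) {
    case 'hello':
      return 'greeted: ' + msg.data;
    case 'goodbye':
      return 'left: ' + msg.data;
    default:
      return 'unknown event: ' + msg.event;
  }
}
```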


I don't normally like bashing open-source projects, but socket.io should not be used. It may be noob-friendly, but that's just because it does things so automatically that you can't really use it correctly. When I was a websockets noob, I used socket.io briefly, and it was a complete waste of time.

See, for example, https://github.com/Automattic/socket.io-client/issues/572 (closed without comment).

If you want a nice websockets library that handles old browsers, use SockJS. It's far better.

[edit: fixed a silly typo]


Agreed - I've been using SockJS for a while and haven't had any issues. It's got a super-simple API. (intended to be as close to the WebSocket API as possible)

If your language of choice has a SockJS implementation, I'd recommend it as a first option.


This is trivial to do yourself. JSON.stringify(), JSON.parse(), switch() {}. It's not really some massive benefit of using the library. And this way there's no magic.

Also, magically trying to decode strings if they look like JSON sounds rather scary.


> Also, magically trying to decode strings if they look like JSON sounds rather scary.

That's not what's going on. Each message is tagged with a "type", and one of the tags means "JSON.parse this into an object" while another means "this is just a string".

The rest of the types are for control messages (like ping/pong), I believe.
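The idea can be sketched as a tiny codec (illustrative only — the real engine.io/socket.io framing uses its own numeric packet-type codes for open/close/ping/pong/message; this just shows the tagging principle):

```javascript
// Illustrative frame codec: a one-character type tag in front of the payload.
// '0' = plain string, '1' = JSON-encoded value. Not the actual wire format.
function encodeFrame(value) {
  return typeof value === 'string' ? '0' + value : '1' + JSON.stringify(value);
}

function decodeFrame(frame) {
  var payload = frame.slice(1);
  return frame[0] === '1' ? JSON.parse(payload) : payload;
}
```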


- Plenty of consumer grade firewalls and routers will mess up socket connections

- E-commerce sites can't afford to drop customers with old browsers (and what counts as old, anyway? A browser that is a few years old should be supported regardless)

- Some corporate networks won't support anything other than plain HTTP over port 80

And so on. 'Legacy' for you is everyday business for someone else.


IE 8 and 9 don't support web sockets. For many companies, that's still a significant piece of the market.


Thus the mention of "legacy platforms". If IE9 and below matters then, yes, you should use it.


I remember when Microsoft tried to pass IE9 as a "modern browser" - with less than half the HTML5 features of Chrome and Firefox at the time (which is still true for the latest IE versions btw).


IE11 is fairly modern. You can't berate Microsoft for trying to catch up.

We don't want to be in a position where there can't be new browser engines because they're not "modern" if they don't implement literally every feature their competitors support.


IE11 was fairly modern when it came out. It will likely be legacy long before IE12, or whatever they decide to call it, comes out. They're not really going to be able to play catch-up until they divorce the browser from the OS. Given that this is Microsoft, it is kind of funny that this is their problem now.


> IE11 was fairly modern when it came out.

Still is. IE11 is an evergreen browser with constant improvement.

> It will likely be legacy long before IE12 or whatever they decide to call it comes out.

There won't be another IE. There will be Spartan.

> They're not really going to be able to play catch-up until they divorce the browser from the OS.

They basically have, long ago. IE10 and IE11 aren't Windows 8-exclusive.


> IE11 is an evergreen browser with constant improvement.

Until Spartan comes out?

> There won't be another IE. There will be Spartan.

So we're going to be stuck supporting IE11 long after "Spartan" has taken hold just like every previous release? Doesn't that go against the whole purpose of the "Evergreen" browser?

> They basically have, long ago. IE10 and IE11 aren't Windows 8-exclusive.

I don't mean exclusive to the operating system. I mean that there are parts of the OS that rely on the browser libraries themselves. And there are weird situations where OS components rely on IE libraries, or (worse) Office libraries. That, at least for a while, was one of the big reasons the problems would never get fixed.


Sorta wish they had shipped IE9 on Windows XP as well as Windows 7, so companies would have more incentive to drop IE8 support.


Those companies are part of the problem, not the solution.

FWIW, we're driving our users to deploy at least IE10 before talking to us, preferably IE11. Supporting old browsers is just not worth it--for devs in the short-term, or clients in the long-term.

Save them from themselves.


> Supporting old browsers is just not worth it--for devs in the short-term, or clients in the long-term.

The part you missed is that it is worth it for clients in the short term. If you have an eCommerce site and a portion of your potential customers use IE8, then you cater to IE8. It's as simple as that. When your business depends on making money every month, "short term" or "long term" doesn't matter as much.


Countries still block websockets from other countries. We don't use pure websockets because AJAX polling still works even when firewalls and other hardware stop websockets.


Even over TLS? Are your connections being MITMed?

You shouldn't have any problems with firewalls so long as you use TLS. And if you don't use TLS and rely on XHR fallback, then you're suffering a downgrade in speed and latency.


We tried switching from SockJS to pure websockets and had to revert because it broke too many users, even though our website only supports WebGL (IE11, etc) and all communication is done over https/wss. One of the biggest problems was plugins like the zenmate privacy plugin https://chrome.google.com/webstore/detail/zenmate-security-p...


How do these plugins cause issues with WebSocket? Are they stripping SSL from connections? That makes me extremely suspicious of their authors. :/


> Even over TLS? Are your connections being MITMed?

Some corporate networks do MITM you and mandate HTTP only (we've seen this surprisingly often in the wild).

Now, for most apps this isn't a concern since those employees probably wouldn't be allowed to use the app, but it can be quite important for apps that specifically target large businesses.


We implemented our own client using libwebsockets (https://libwebsockets.org/trac/libwebsockets). Our main rationale was that we reuse 95% of the same codebase across iOS/Android/Win/OSX/Linux.

I got curious about this library, but the boost dependency is an immediate "oh hell no". Even more so now that we have good C++11 support across the board; the boost dependency is something we avoid like the plague (we used to have it, and C++11 was a godsend that enabled us to get rid of boost).

Nice to see more C++ libraries here though.


> but the boost dependency is an immediate "oh hell no".

So forgive me, because I'm pretty new to C++ (well, I knew it semi-decently 12 years ago; trying to relearn now), but most of the C++ developers I know love boost, especially for asio, since most TCP libs seem to be very old. What is the reason it makes you go "oh hell no"?


If you don't care about speed, Berkeley sockets (bind, listen, accept, etc.) are just fine and very easy to deal with.

If you do care about speed, you should be banging against the operating system's provided tools anyways (IOCP on Windows, kqueue on BSD, epoll on Linux, etc.).

If it's abstraction you care about, you shouldn't be doing networking with raw TCP anyways, and should just use zmq, nanomsg, or whatever, and not drag in the entire clown car of boost.


> not drag in the entire clown car of boost.

As far as I can tell you can just import what you want and not have to bring in the entire boost library to use asio. Is that not correct?


This weekend I tried to bring in boost format. I tried to bring in JUST format, but nope... exceptions, and config, and a bunch of other stuff... Fifteen libraries, maybe?


asio specifically is available in a standalone version that does not depend on any other boost libraries.


Boost's asio has backends (reactors) for IOCP, kqueue, etc. The whole point is you write to one interface and take advantage of their performance.


10 years ago, for me, it was because it was really hard to get it to compile. I might get it set up on my cygwin install or my Linux box, but have trouble compiling it on both. I was able to get Mozilla compiling faster and more easily than boost... It's probably improved a lot since then, but back then if you had asked me to use boost I'd have been rather upset as well. Nowadays there are at least packages for boost on Linux, and homebrew on Mac, so it's not bad at all.


> but the boost dependency is an immediate "oh hell no"

If this were a C++11 and standalone ASIO dependency would that change your mind? If it is a specific issue with ASIO, what sort of network transport would you want to see used for something like this (there is no network library in C++11)?


Yes, it would change my mind if all the dependencies are easy to compile and work cross-platform (Android/iOS/OSX/Linux/Windows) without a problem.


> the boost dependency is an immediate "oh hell no"

Can you expand on that?


Boost makes you enter header and linking hell. I'm talking "an entire day to figure out how to include that in your makefile and why the fuck doesn't it link correctly" level hell.

You can also count on having 10MB+ of libraries to distribute if you use the entire boost lib (which, let's be honest, will be the case 90% of the time because people can't be bothered with cherry-picking features)

And just pray that you don't need to recompile boost because you're in for a fun few hours of wasted compilation because it fails at 80% because of <cryptic boost message>


This is what I find most surprising about C++. Apparently including a single library as a dependency takes large amounts of effort. The end result seems to be that developers avoid using dependencies and resort to rewriting large amounts of code.

If you contrast this with, say, Node.js, you could simply do "npm install boost --save-dev" and "var boost = require('boost');".

Nobody seems to use C++ package managers either, and the typical reply is to use the package manager of the operating system. But since that's not platform neutral there's a lot of wasted effort of maintaining multiple instructions to install the library. And Windows instructions typically require non-trivial amount of manual work, often even setting up weird environment variables pointing to various locations.

And then you want to build for 32-bit instead of 64-bit and there's more manual work.

I would have imagined that this would be a solved problem by now.


Hmm, granted I'm very new to boost and to C++ in general because I'm relearning it, but a couple of nights ago I grabbed boost, compiled it from source and tried out an asio example, and it all seemed to work pretty easily. Maybe I need to get into more complex projects to see how bad it gets (which is worrisome, since I'm essentially a newbie and don't want to waste my time learning something that won't be useful later, or will just be painful).

> And just pray that you don't need to recompile boost because you're in for a fun few hours of wasted compilation because it fails at 80% because of <cryptic boost message>

It took my lowest-spec MacBook Air about 15-20 minutes to compile from source. Does it normally take hours for you?


Of course, boost is not as hard as some grumpy oldtimers make you believe. Anyone who refuses to use the massive amounts of tested, reliable code that is boost because 'it's too hard to install', should be banned from any real-world project for severe NIH syndrome.


>grumpy oldtimers

I wish I was a grumpy oldtimer, but I'm more of a dirty youngun'.

Note that I didn't say "don't use boost". Boost is amazing and you should use it if it saves you time. It's just that initial setup may be... let's say interesting.


This... Boost will ruin your compile times. I'm addicted to instant compilation so I avoid template hell.


They were referring to compile times to build boost itself, not of their own projects. On most Linux distros, there are binaries already available so that's not a problem; yes with brew on OSX it takes a long time but you don't do it that often.

Of course template-heavy, header-only libraries will increase compile time, but not insanely so. I try to be careful about including boost's ease-of-use headers which pull in everything and keep it limited to what I really need from a given sub-project.

Most of what I use from boost is in C++11, except asio, but the same principles apply.


Bingo.


One thing to keep in mind is that there are C++ devs who like Boost and there are C++ devs who hate Boost with passion. There are also said to be devs who are merely OK with this abomination but they are more of an urban legend really.

In other words, consider the fact that by throwing Boost in the mix, you rob yourself of a good half of the intended audience, if not more.


I guess I belong to the "urban legend" category then.

I never introduced Boost into my projects, but when one of the co-developers on an open source project where I am the lead (FlameRobin.org) said it would be useful, and explained what problems it would solve, I just said "ok". IIRC, this was circa 2007.

All those complaining about compile times should learn about PCH (pre-compiled headers). It's fairly simple and makes sure headers are compiled only once (unless they change). AFAIK, all modern compilers support PCH (GCC, MSVC, even Borland's).


Devs who happily use C++ yet balk at spending a day to get a large third party library integrated into their build are a curious bunch.


C++ devs are a curious bunch :-)

Fixing a build can be pretty torturous though. My main thought when I'm looking at using a library in C++ is "oh god I hope this isn't a pain to build"


It has nothing to do with the integration difficulties.

Many C++ devs, me included, come from a C background, and the amount of obfuscation Boost adds to the code is simply not worth any benefits it brings along. Yes, you can do a lot with a single line of Boost'ed code, but when it maps onto pages of simpler code, you basically lose a lot of control over the code. It also bloats the binary, which may not even register as a viable concern with some, but which does have several long-reaching complications.


That's fair. C devs are also a little more used to being tied to specific OS facilities such as kqueue/select. For me, boost shines as an ASIO implementation. When that becomes stdlib, I'm not sure I would be compelled to use it.

This is along the lines of the traditional abstraction level debates, rather than anything particularly wrong with boost. I think it gets singled out unfairly as being a particularly difficult 3rd party library.


Why do you think it's only "a day"? Also, you are forever introducing a dependency into your build chain, codebase, and executable. EVERY 3rd-party dependency should be audited as such (performance, security, usability, etc.). Boost is no exception, but given the amount of things it comes with (often) and the strain it puts on compiler requirements, it is typically included as a last resort.

I should mention, people who talk without much context are a curious bunch too.


Yes, there is always a balance of concerns. Build management with 3rd-party deps is not trivial, but it's part of the lifecycle. In my experience boost integration really is "a day" on Windows (including stashing the .libs as their own repo and reproducing a fresh 'git clone' build), and 10 minutes on a platform where you can use the package manager to obtain it. You can always excise it from a project later if you don't abuse 'using' statements. FWIW, I only really use variant and ASIO.

If you were adding boost to a project just to do some weird spirit/preprocessor hacks, I'd be wary. But if you told me you were going to write yet another cross platform ASIO wrapper because you couldn't figure out how to compile boost I'd be even more concerned.


I think the strawman here is that people are wary of adding boost because they are afraid of compiling it. Where did you get that idea from?


To me adding a C++ client to Socket.IO seems like a feature to increase the adoption of Socket.IO by embedded/IoT devices.


If you're not bound to a browser, what's the benefit of using WebSockets vs. something on top of normal sockets? Personally, I thought this was going to be a Socket.IO server in C++.


The benefit is you don't need a separate service for non-web clients.

If you have something TCP-based, it's usable only by non-web clients. If you have something WebSocket-based, it's usable everywhere.

Edit: An additional consideration: WebSocket is a nice, message-based protocol with easy-to-use APIs everywhere, unlike the stream-based TCP whose usual API (Berkeley sockets) is rather more difficult to use.
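To make the stream-vs-message point concrete, here is roughly what you end up writing on top of raw TCP just to get message boundaries back (a sketch using length-prefixed frames; `frame` and `makeParser` are made-up names):

```javascript
// Length-prefixed framing: each message is a 4-byte big-endian length
// followed by the payload. The stateful parser copes with TCP delivering
// data at arbitrary chunk boundaries (partial frames, multiple frames).
function frame(payloadStr) {
  var payload = Buffer.from(payloadStr, 'utf8');
  var header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

function makeParser(onMessage) {
  var buf = Buffer.alloc(0);
  return function feed(chunk) {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 4) {
      var len = buf.readUInt32BE(0);
      if (buf.length < 4 + len) break; // wait for the rest of the frame
      onMessage(buf.slice(4, 4 + len).toString('utf8'));
      buf = buf.slice(4 + len);
    }
  };
}
```

With WebSocket, all of this comes for free: `onmessage` hands you one complete message at a time.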


TCP is usable everywhere. Socket.io builds on top of websockets.

Socket.io is useful when you have legacy web browsers that may not do websockets. Beyond that, it does not bring much over regular websockets.


> TCP is usable everywhere.

Not on the web it isn't.

I agree that socket.io offers little over WebSocket except for backwards-compatibility, but the OP's question was more about why you'd want to use WebSocket.


I had the same gut feeling. TCP already seems like a fine enough abstraction (especially in comparison to UDP), and it is actually literally _everywhere_.

Maybe someone should write a TCP socket implementation for the browser instead of trying to re-implement the transport layer on top of the application layer.


Because firewalls are really stupid, and can be fooled into believing your app is a web browser (which I guess it kind of is at that point).


Out of curiosity, what's the reasoning behind going with C++? Just for fun or is there some advantage to C++ over other languages for sockets?


I believe it's more for client-side coding, not the node.js socket.io server reimplemented in C++...


Ah, that makes sense. And I think I might have found an answer to my own question:

"By virtue of being written in C++, this client works in several different platforms. The examples folder contains an iPhone, QT and Console example chat client!" - From the github repo


Why is the inclusion of boost touted like it's a good thing? Boost is fun to play around with but for a production library, no thanks.


Would love a C# implementation that I could use in Xamarin!


Why not use SignalR?


Right now I am using IPWorks WS client for C# and the simple 'ws' nodejs module, they are talking to each other just fine. Doesn't have the same capabilities as socket.io though (we have to handle reconnections a bit differently, no concept of channels, etc). Would SignalR talk to a nodejs socket.io server with these features?


gRPC (grpc.io) - that's where the next generation of efficient multiplexed streams will come from. Lack of client-side libs and of http/2 adoption means it's still a ways out.


That's a bold claim. RPC mechanisms are one of those spaces - like IRC clients, text editors and parser generators - where there seem to be more implementations than extant users.


How does it differ from REST over HTTP/2?


Check out the FAQ http://www.grpc.io/faq/


Hope to see it soon in biicode!


Anybody else notice that Automattic (the wordpress company) now owns the github account for Socket.io? (https://github.com/Automattic/socket.io/) As well as mongoose, kue, expect.js and others... when and why did this happen?


LearnBoost/Guillermo made Socket.io; they had a startup under the same umbrella called CloudUp; Automattic acquired CloudUp/LearnBoost/Socket.io (2 years ago I think).


We do a lot more than just WordPress. :)

But yes, they came aboard with the LearnBoost / CloudUp acquisition almost two years ago.

A lot of devs -- both pre-hire background and post-hire focus -- have little or nothing to do with the WordPress back-end.

http://automattic.com/work-with-us/


Well, I guess LearnBoost, whatever it was, was "acquired" by Automattic.



