ZeroMQ 4.2.0 (github.com/zeromq)
194 points by arunc on Nov 4, 2016 | 37 comments



I'm confused. I thought 0MQ had essentially been superseded by nanomsg [0]. What's the deal?

On a related note, using nnpy [1] and mangos [2] has been a real pleasure.

And lastly, a bit of info on how 0MQ and nanomsg differ [3].

[0] http://nanomsg.org/
[1] https://github.com/nanomsg/nnpy
[2] https://github.com/go-mangos/mangos
[3] http://nanomsg.org/documentation-zeromq.html


Read the ZeroMQ manual. That's the most important part of ZMQ, as it's the clearest and best guide to your options for partitioning a task and arranging communication between software processes.

After that, you can implement your stuff in 0MQ, AMQP, nanomsg, or whatever else. I fully expect nanomsg and 0MQ to compete with each other on Github over the next 5 years or so, and I personally won't take sides.

For me, I use the manual to design programs that I then write in Rust, much of it with no explicit message passing at all. (REQ-REP and PUSH-PULL become simple function calls, sometimes mediated through thread spawning and sometimes not, for example.)
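To make that last parenthetical concrete, here is a toy sketch (in Python, with made-up names) of how an in-process REQ-REP pair collapses into a plain function call mediated by a spawned thread and two queues:

```python
import queue
import threading

requests = queue.Queue()
replies = queue.Queue()

def serve():
    # REP side: receive a request, compute, send a reply.
    while True:
        msg = requests.get()
        replies.put(msg.upper())

def call(msg):
    # REQ side: strict send-then-receive. From the caller's point
    # of view this is just a blocking function call.
    requests.put(msg)
    return replies.get()

threading.Thread(target=serve, daemon=True).start()
print(call("hello"))  # prints "HELLO"
```

This only works for a single caller at a time, of course; the point is that the REQ-REP topology from the manual survives even when the "sockets" are gone.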


Hmm, I'm not sure I understand exactly what you're saying.

The 0MQ manual is (indeed) excellent, but it seems like a design whitepaper. It essentially describes what are known as the scalability protocols. As you say, it's independent of the implementation, i.e.: the library.

My confusion largely stems from a presentation I watched [0] in which it was implied that 0MQ suffered from a few design issues, and had therefore been forked. It seems like the library was rewritten in the meantime, however, so perhaps these points are no longer relevant.

I suppose I'm looking for a bit of clarity as to why the fork occurred in the first place.


http://hintjens.com/blog:117#toc4

``For ZeroMQ, our stated mission was "Fastest. Messaging. Ever." This is a nice, and nearly impossible answer to a problem we could all agree on: namely, the slow, bloated technology available at that time. However, my co-founder Martin and I had conflicting goals. He wanted to build the best software possible, while I wanted to build the largest community possible. As the user base grew, his dramatic changes, which broke existing applications, caused increasing pain.

``In that case, we were able to make everyone happy (Martin went off to build a new library called "Nano").''


That fork was long enough ago that those issues are moot.


Martin Sustrik split off over those differences and did a completely new implementation called nanomsg. He is no longer maintaining nanomsg; that task has fallen to someone else, but it doesn't seem to be too active at the moment.


Right, but my point was rather about the differences you mention. I was under the impression that 0MQ was superseded by nanomsg for technical reasons (UNIX-iness, 0MQ's threading model, etc.).

Whence comes the impression that nanomsg is inactive?


>Whence comes the impression that nanomsg is inactive?

Because two 'main' maintainers have so far abandoned it quite publicly; maybe someone picked it up afterwards.


Garrett D'Amore, who was previously a maintainer and is also the maintainer of mangos, returned in April on the condition he be BDFL.

See: https://github.com/nanomsg/nanomsg/issues/619


Last commit 9 days ago: it's active.


Ah, I missed it.

The mangos project seems to be alive and well, in any case.



Not disagreeing with you or the postmortem, but:

1. that post is from Feb 8, 2016

2. nanomsg 1.0 was released on June 10, 2016

After nanomsg 1.0 release, recent activity on github includes:

8 days ago -- fixes #828 Add logical connections to UDP RFC

9 days ago -- fixes #827 nanomsg zerotier mapping rfc errors

11 days ago -- fixes #825 Extra tests in pipe.c gdamore committed 11 days ago

15 days ago -- fixes #800 accept4 not implemented on all systems

15 days ago -- fixes #783 WS transport - not connectable from Firefox

16 days ago -- fixes #821 NN_WS_HANDSHAKE_CRLF is silly

And so on.


After that, Garrett D'Amore eventually returned on condition he be BDFL. See https://github.com/nanomsg/nanomsg/issues/619

Since then the project looks to be fairly healthy.


Oh good! This is the guy behind mangos, too.


Hmm, I missed it. In addition to the sibling comment above, the mangos project seems alive and well.


Yes, you are confused. 0MQ [0] has never been superseded by nanomsg. The deal, as you point out very clearly, is that the two projects are alive and well.

On a related note, using jupyter [1], salt [2], and circus [3] has been a real pleasure too.

And lastly, a bit of info on how 0MQ and nanomsg differ[4].

[0] http://zeromq.org/
[1] http://jupyter.org/
[2] https://saltstack.com/
[3] https://github.com/circus-tent
[4] http://hintjens.com/blog:112


I've been seeing the occasional post from his blog about the choice to die but didn't connect them to ZMQ until I saw this.

ZMQ, as a library, is a work of art, and Code Connected is by far one of the best programming books I have had the pleasure to read. That, coupled with the deep and interesting posts on his blog, shows we have lost a truly great mind.


"Tell them I was a writer.

A maker of software.

A humanist. A father.

And many things.

But above all, a writer.

Thank You. :)"

- Pieter Hintjens http://hintjens.com/


If you're just learning about ZeroMQ (or just reading this post), I suggest taking some time to read or listen to some of Pieter Hintjens' blog posts, books, and talks.

It's a wonderful experience and he seemed like a great/genuine dude.


I think I will use PC3[0] in my next project. Seems like an easy and sane way to structure repos.

[0] http://hintjens.com/blog:23


I really enjoyed Why Optimistic Merging Works Better[0] - and found it to be a very interesting topic and way to approach merging for projects.

[0] http://hintjens.com/blog:106


How would you review PC3 versus C4? It seems like C4 (https://rfc.zeromq.org/spec:42/C4) is the latest evolution of this design. It seems to embrace optimistic merging even more than PC3 and does away with reviewers.


Hm, from what he wrote, it seems to be the other way around:

"The Pedantic Code Construction Contract (PC3) is an evolution of the GitHub Fork + Pull Model, and the ZeroMQ C4 process..."


That's a good point. I was basing it on dates I could find and what I've been reading in Social Architecture. Perhaps the latest C4 revision is later, and PC3 sits smack in the middle.


I haven't used zeromq in forever, but did they ever fix the problem with request/reply sockets where the server socket could get into an indeterminate state after a client socket drops at just the wrong time?


Nope. The reality is that ZeroMQ is useful for a variety of tasks but no longer really excels at the tasks its specific socket types were built for. Hintjens offers a heartbeating pattern to get around this issue for REQ/REP sockets, though.

For pub/sub, Aeron is now much better (far more throughput, and it doesn't crash at multi-gigabit rates the way OpenPGM does). For REQ/REP, HTTP/2- and QUIC-based approaches are reigning supreme (if you need high performance across a WAN, you can repurpose something like FIXT 1.1 from the FIX protocol).


Looks like socket heartbeating has been added in this release of ZMQ. From what I can gather from the docs, this should address the issue the parent post describes, but does anyone know definitively? See the new ZMQ_HEARTBEAT_* options here [0] and the Connection Heartbeating section here [1].

[0] http://api.zeromq.org/4-2:zmq-setsockopt
[1] https://rfc.zeromq.org/spec:37/ZMTP/
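For the curious, the new options can be set from pyzmq roughly like so. This is a configuration sketch only (the endpoint and timing values are made up, and it assumes a pyzmq build against libzmq 4.2+); it has not been exercised against a live peer:

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)

# Send a ZMTP PING to the peer every second.
sock.setsockopt(zmq.HEARTBEAT_IVL, 1000)
# If no traffic (e.g. a PONG) arrives within 5 s, treat the peer
# as dead and drop the connection instead of wedging the socket.
sock.setsockopt(zmq.HEARTBEAT_TIMEOUT, 5000)
# Ask the remote side to time *us* out if we go silent for 10 s.
sock.setsockopt(zmq.HEARTBEAT_TTL, 10000)

sock.connect("tcp://127.0.0.1:5555")  # hypothetical endpoint
```

Note this happens at the ZMTP transport level, so it detects dead TCP peers; it doesn't by itself recover a REQ socket stuck mid-request, which is what the zguide's application-level heartbeating patterns address.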


> For REQ/REP, HTTP/2- and QUIC-based approaches are reigning supreme

Oh? I implemented something recently with req/rep using pyzmq and then ported it to grpc. grpc was an order of magnitude slower. Then I updated the zeromq code to do pipelining via router/dealer, and that was even faster: by sending pipelined batches of 100 items it can do 160k lookups/second. grpc+batching maxed out around 20k, I think.

Could have been protobuf that was the cause of the performance hit though.


gRPC is not, and certainly never will be, the fastest protocol for small request/reply messages. The reason is the stream multiplexing layer it requires: you almost certainly need to copy data from the connection's receive buffer into a stream's receive buffer and then into the application, with the opposite on the sending side.

If you don't have the stream multiplexing and just write complete request or response packets to a connection (similar to Thrift), you save quite a lot of overhead.

However, this multiplexing feature is also the biggest upside and achievement of gRPC, since it lets you stream big requests or responses, not just small packets. It enables multiple big streams (file uploads, etc.) in parallel over a single connection without one blocking another. And of course it enables flow-controlled bidirectional streaming IPC, which is hard to find in other systems.


Well the underlying thing I am doing is small request/reply messages - I'm doing metadata lookup for ip addresses. The way I sped things up with zeromq was first by batching requests. Essentially, if I have 10k lookups to do, instead of sending 1 at a time, I group them into blocks of 100 and send

    ' '.join(block)
Then I do all the lookups on the server and send a block of responses back. This turns what would be 10k queries into only 100 rpc calls.
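The batching step above can be sketched like this (the addresses and the dict standing in for the server-side metadata table are made up for illustration):

```python
# Client side: group 10k addresses into blocks of 100 and join
# each block into one space-separated request string.
addresses = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(10_000)]
blocks = [addresses[i:i + 100] for i in range(0, len(addresses), 100)]
requests = [" ".join(block) for block in blocks]
assert len(requests) == 100  # 10k lookups -> 100 RPC calls

# Server side: split the block, look each address up, and send
# one joined block of responses back.
table = {addr: "meta-%s" % addr for addr in addresses}

def handle(request):
    return " ".join(table[a] for a in request.split())

responses = [handle(r) for r in requests]
```

The per-message framing and round-trip cost is paid once per block of 100 instead of once per lookup, which is where the speedup comes from.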

That got me to about 60k lookups a second locally, but over a WAN link that dropped down to 10k. I fixed that by implementing pipelining using a method similar to the one described under http://zguide.zeromq.org/page%3Aall#Transferring-Files where I keep the socket buffers busy by having 10 chunks in flight all the time.

That got things to 160k/s locally and 100k+/sec even over a slow link.
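The pipelining idea reduces to a credit-based window. Here is a toy, single-process sketch (queues stand in for the router/dealer sockets, and the doubling server stands in for the real lookups; the window of 10 matches the comment above):

```python
import queue
import threading

out_q, in_q = queue.Queue(), queue.Queue()
WINDOW = 10  # max blocks in flight at any time

def server():
    # Stand-in for the lookup server: doubles every item in a block.
    while True:
        block = out_q.get()
        in_q.put([x * 2 for x in block])

threading.Thread(target=server, daemon=True).start()

blocks = [[i] * 100 for i in range(100)]
results, in_flight, sent = [], 0, 0
while len(results) < len(blocks):
    # Keep up to WINDOW blocks in flight so the pipe never drains
    # while we sit waiting on a single reply.
    while in_flight < WINDOW and sent < len(blocks):
        out_q.put(blocks[sent])
        sent += 1
        in_flight += 1
    results.append(in_q.get())
    in_flight -= 1
```

With a real network in the middle, the window is what hides the link latency: there is always another request already in the socket buffers when a reply comes back.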

I'll have to mess with grpc a bit more. Looking at my grpc branch it looks like I tried using the request_iterator method first, then I tried a regular function that used batching, but I didn't try using request_iterator with batching. I think the biggest difference would be if request_iterator uses a pipeline, or if it still only does one req/reply behind the scenes.

I'm sure one thing that doesn't help is that

  message LookupRequest {
    string address = 1;
  }
  message LookupRequestBatch {
    repeated LookupRequest requests = 1;
  }
Ends up as a lot more overhead than doing ' '.join(batch)


gRPC in Python is much slower than in C++ or even Java.


Yeah.. I figured as much.. zeromq in python is not slow though :-)

I could probably port the service to c++ or go, it's really just some string parsing and a hash table lookup of sorts.. but when my PoC python version does 160k lookups a second, I don't feel the need to spend the time :-)


"On python" can mean a few different things. It can mean a straight port, running in the python interpreter, or it can mean Cython (or similar) with all of the tight loops running as auto-generated compiled C code.

Numpy is a great example of this; all of the numerical operations are running on very fast compiled code, and being good at writing fast numpy involves knowing the ins and outs of how to minimize passing information between the slow python interpreter and the fast numerical engines. You want to just do all of the computation 'inside' of numpy, and then get the result at the end.
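A tiny illustration of that point, assuming numpy is available (the array and sizes are arbitrary):

```python
import numpy as np

xs = np.arange(1_000_000, dtype=np.float64)

# Slow: the loop runs in the Python interpreter, crossing the
# Python/C boundary once per element.
total_slow = 0.0
for x in xs[:1000]:  # truncated; the full million is painfully slow
    total_slow += x * x

# Fast: one vectorized expression, so the loop over all one
# million elements runs entirely inside numpy's compiled code.
total_fast = float(np.dot(xs, xs))
```

The fast version touches the interpreter twice (call in, result out) instead of a million times, which is exactly the "keep the computation inside numpy" discipline described above.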


Yeah, I'm not sure how optimized the python protocol buffer stuff is. Years ago I benchmarked the pure python protobuf lib and it was terribly slow.

grpc was nice to work with though. I generated the stubs and stuck my logic in there and had a working client/server in about 20 minutes. The streaming request/reply stuff was crazy easy to use, though I don't know if it does pipelining.


That sounds like a problem that can happen with Linux sockets in general, and may not be ZeroMQ's fault.


Who took over zmq after Hintjens' untimely death?



