
/chapter2.txt

https://github.com/tsellon/zguide


.output chapter2.wd
++ Chapter Two - Intermediate Stuff
In Chapter One we took 0MQ for a drive, with some basic examples of the main 0MQ patterns: request-reply, publish-subscribe, and pipeline. In this chapter we're going to get our hands dirty and start to learn how to use these tools in real programs.
We'll cover:
* How to create and work with 0MQ sockets.
* How to send and receive messages on sockets.
* How to build your apps around 0MQ's asynchronous I/O model.
* How to handle multiple sockets in one thread.
* How to handle fatal and non-fatal errors properly.
* How to handle interrupt signals like Ctrl-C.
* How to shut down a 0MQ application cleanly.
* How to check a 0MQ application for memory leaks.
* How to send and receive multipart messages.
* How to forward messages across networks.
* How to build a simple message queuing broker.
* How to write multithreaded applications with 0MQ.
* How to use 0MQ to signal between threads.
* How to use 0MQ to coordinate a network of nodes.
* How to create durable sockets using socket identities.
* How to create and use message envelopes for publish-subscribe.
* How to make durable subscribers that can recover from crashes.
* How to use the high-water mark (HWM) to protect against memory overflows.
+++ The Zen of Zero
The Ø in 0MQ is all about tradeoffs. On the one hand this strange name lowers 0MQ's visibility on Google and Twitter. On the other hand it annoys the heck out of some Danish folk who write us things like "ØMG røtfl", and "//Ø is not a funny looking zero!//" and "//Rødgrød med Fløde!//", which is apparently an insult that means "may your neighbours be the direct descendants of Grendel!" Seems like a fair trade.
Originally the zero in 0MQ was meant as "zero broker" and (as close to) "zero latency" (as possible). In the meantime it has come to cover different goals: zero administration, zero cost, zero waste. More generally, "zero" refers to the culture of minimalism that permeates the project. We add power by removing complexity rather than by exposing new functionality.
+++ The Socket API
To be perfectly honest, 0MQ does a kind of switch-and-bait on you. Which we don't apologize for; it's for your own good and hurts us more than it hurts you. It presents a familiar BSD socket API, but that hides a bunch of message-processing machines that will slowly fix your world-view about how to design and write distributed software.
Sockets are the de-facto standard API for network programming, as well as being useful for stopping your eyes from falling onto your cheeks. One thing that makes 0MQ especially tasty to developers is that it uses a standard socket API. Kudos to Martin Sustrik for pulling this idea off. It turns "Message Oriented Middleware", a phrase guaranteed to send the whole room off to Catatonia, into "Extra Spicy Sockets!", which leaves us with a strange craving for pizza and a desire to know more.
Like a nice pepperoni pizza, 0MQ sockets are easy to digest. Sockets have a life in four parts, just like BSD sockets:
* Creating and destroying sockets, which go together to form a karmic circle of socket life (see zmq_socket[3], zmq_close[3]).
* Configuring sockets by setting options on them and checking them if necessary (see zmq_setsockopt[3], zmq_getsockopt[3]).
* Plugging sockets into the network topology by creating 0MQ connections to and from them (see zmq_bind[3], zmq_connect[3]).
* Using the sockets to carry data by writing and receiving messages on them (see zmq_send[3], zmq_recv[3]).
Which looks like this, in C:
[[code language="C"]]
void *mousetrap;

//  Create socket for catching mice
mousetrap = zmq_socket (context, ZMQ_PULL);

//  Configure the socket
int64_t jawsize = 10000;
zmq_setsockopt (mousetrap, ZMQ_HWM, &jawsize, sizeof jawsize);

//  Plug socket into mouse hole
zmq_connect (mousetrap, "tcp://192.168.55.221:5001");

//  Wait for juicy mouse to arrive
zmq_msg_t mouse;
zmq_msg_init (&mouse);
zmq_recv (mousetrap, &mouse, 0);

//  Destroy the mouse
zmq_msg_close (&mouse);

//  Destroy the socket
zmq_close (mousetrap);
[[/code]]
Note that sockets are always void pointers, and messages (which we'll come to very soon) are structures. So in C you pass sockets as such, but you pass addresses of messages in all functions that work with messages, like zmq_send[3] and zmq_recv[3]. As a mnemonic, realize that "in 0MQ all ur sockets are belong to us", but messages are things you actually own in your code.
Creating, destroying, and configuring sockets works as you'd expect for any object. But remember that 0MQ is an asynchronous, elastic fabric. This has some impact on how we plug sockets into the network topology, and how we use the sockets after that.
+++ Plugging Sockets Into the Topology
To create a connection between two nodes you use zmq_bind[3] in one node, and zmq_connect[3] in the other. As a general rule of thumb, the node which does zmq_bind[3] is a "server", sitting on a well-known network address, and the node which does zmq_connect[3] is a "client", with unknown or arbitrary network addresses. Thus we say that we "bind a socket to an endpoint" and "connect a socket to an endpoint", the endpoint being that well-known network address.
0MQ connections are somewhat different from old-fashioned TCP connections. The main notable differences are:
* They go across an arbitrary transport ({{inproc}}, {{ipc}}, {{tcp}}, {{pgm}} or {{epgm}}). See zmq_inproc[7], zmq_ipc[7], zmq_tcp[7], zmq_pgm[7], and zmq_epgm[7].
* They exist when a client does zmq_connect[3] to an endpoint, whether or not a server has already done zmq_bind[3] to that endpoint.
* They are asynchronous, and have queues that magically exist where and when needed.
* They may express a certain "messaging pattern", according to the type of socket used at each end.
* One socket may have many outgoing and many incoming connections.
* There is no zmq_accept() method. When a socket is bound to an endpoint it automatically starts accepting connections.
* Your application code cannot work with these connections directly; they are encapsulated under the socket.
Many architectures follow some kind of client-server model, where the server is the component that is most stable, and the clients are the components that are most dynamic, i.e. they come and go the most. There are sometimes issues of addressing: servers will be visible to clients, but not necessarily vice-versa. So mostly it's obvious which node should be doing zmq_bind[3] (the server) and which should be doing zmq_connect[3] (the client). It also depends on the kind of sockets you're using, with some exceptions for unusual network architectures. We'll look at socket types later.
Now, imagine we start the client //before// we start the server. In traditional networking we get a big red Fail flag. But 0MQ lets us start and stop pieces arbitrarily. As soon as the client node does zmq_connect[3] the connection exists and that node can start to write messages to the socket. At some stage (hopefully before messages queue up so much that they start to get discarded, or the client blocks), the server comes alive, does a zmq_bind[3] and 0MQ starts to deliver messages.
A server node can bind to many endpoints and it can do this using a single socket. This means it will accept connections across different transports:
[[code language="C"]]
zmq_bind (socket, "tcp://*:5555");
zmq_bind (socket, "tcp://*:9999");
zmq_bind (socket, "ipc://myserver.ipc");
[[/code]]
You cannot bind to the same endpoint twice; that will cause an exception.
Each time a client node does a zmq_connect[3] to any of these endpoints, the server node's socket gets another connection. There is no inherent limit to how many connections a socket can have. A client node can also connect to many endpoints using a single socket.
In most cases, which node acts as client, and which as server, is about network topology rather than message flow. However, there //are// cases (resending when connections are broken) where the same socket type will behave differently if it's a server or if it's a client.
What this means is that you should always think in terms of "servers" as stable parts of your topology, with more-or-less fixed endpoint addresses, and "clients" as dynamic parts that come and go. Then, design your application around this model. The chances that it will "just work" are much better like that.
Sockets have types. The socket type defines the semantics of the socket, its policies for routing messages inwards and outwards, queueing, etc. You can connect certain types of socket together, e.g. a publisher socket and a subscriber socket. Sockets work together in "messaging patterns". We'll look at this in more detail later.
It's the ability to connect sockets in these different ways that gives 0MQ its basic power as a message queuing system. There are layers on top of this, such as devices and topic routing, which we'll get to later. But essentially, with 0MQ you define your network architecture by plugging pieces together like a child's construction toy.
+++ Using Sockets to Carry Data
To send and receive messages you use the zmq_send[3] and zmq_recv[3] methods. The names are conventional, but 0MQ's I/O model is different enough from TCP's model that you will need time to get your head around it.
[[code type="textdiagram"]]
+------------+
|            |
|    Node    |
|            |
+------------+
|   Socket   |
\------------/
      ^
      |
   1 to 1
      |
      v
/------------\
|   Socket   |
+------------+
|            |
|    Node    |
|            |
+------------+
Figure # - TCP sockets are 1 to 1
[[/code]]
Let's look at the main differences between TCP sockets and 0MQ sockets when it comes to carrying data:
* 0MQ sockets carry messages, rather than bytes (as in TCP) or frames (as in UDP). A message is a length-specified blob of binary data. We'll come to messages shortly; their design is optimized for performance and thus somewhat tricky to understand.
* 0MQ sockets do their I/O in a background thread. This means that messages arrive in a local input queue, and are sent from a local output queue, no matter what your application is busy doing. These are configurable memory queues, by the way.
* 0MQ sockets can, depending on the socket type, be connected to (or from, it's the same) many other sockets. Where TCP emulates a one-to-one phone call, 0MQ implements one-to-many (like a radio broadcast), many-to-many (like a post office), many-to-one (like a mail box), and even one-to-one.
* 0MQ sockets can send to many endpoints (creating a fan-out model), or receive from many endpoints (creating a fan-in model).
[[code type="textdiagram"]]
+------------+                 +------------+
|            |                 |            |
|    Node    |                 |    Node    |
|            |                 |            |
+------------+                 +------------+
|   Socket   |                 |   Socket   |
\----+-+-----/                 \------+-----/
     | |                              :
     | +------------------------+     :
     |                          |     :
   1 to N                       |     :   N to 1
   Fan out                      |     :   Fan in
     | +------------------------|-----+
     | |                        |
     v v                        v
/------------\                 /------------\
|   Socket   |                 |   Socket   |
+------------+                 +------------+
|            |                 |            |
|    Node    |                 |    Node    |
|            |                 |            |
+------------+                 +------------+
Figure # - 0MQ sockets are N to N
[[/code]]
So writing a message to a socket may send the message to one or many other places at once, and conversely, one socket will collect messages from all connections sending messages to it. The zmq_recv[3] method uses a fair-queuing algorithm so each sender gets an even chance.
The zmq_send[3] method does not actually send the message to the socket connection(s). It queues the message so that the I/O thread can send it asynchronously. It does not block except in some exception cases. So the message is not necessarily sent when zmq_send[3] returns to your application. If you created a message using zmq_msg_init_data[3] you cannot reuse the data or free it, otherwise the I/O thread will rapidly find itself writing overwritten or unallocated garbage. This is a common mistake for beginners. We'll see a little later how to properly work with messages.
+++ Unicast Transports
0MQ provides a set of unicast transports ({{inproc}}, {{ipc}}, and {{tcp}}) and multicast transports ({{pgm}} and {{epgm}}). Multicast is an advanced technique that we'll come to later. Don't even start using it unless you know that your fan-out ratios will make 1-to-N unicast impossible.
For most common cases, use **{{tcp}}**, which is a //disconnected TCP// transport. It is elastic, portable, and fast enough for most cases. We call this 'disconnected' because 0MQ's {{tcp}} transport doesn't require that the endpoint exists before you connect to it. Clients and servers can connect and bind at any time, can go and come back, and it remains transparent to applications.
The inter-process transport, **{{ipc}}**, is like {{tcp}} except that it is abstracted from the LAN, so you don't need to specify IP addresses or domain names. This makes it better for some purposes, and we use it quite often in the examples in this book. 0MQ's {{ipc}} transport is disconnected, like {{tcp}}. It has one limitation: it does not work on Windows. This may be fixed in future versions of 0MQ. By convention we use endpoint names with an ".ipc" extension to avoid potential conflict with other file names. On UNIX systems, if you use {{ipc}} endpoints you need to create these with appropriate permissions otherwise they may not be shareable between processes running under different user ids. You must also make sure all processes can access the files, e.g. by running in the same working directory.
The inter-thread transport, **{{inproc}}**, is a connected signaling transport. It is much faster than {{tcp}} or {{ipc}}. This transport has a specific limitation compared to {{ipc}} and {{tcp}}: **you must do bind before connect**. This is something future versions of 0MQ may fix, but at present it defines how you use {{inproc}} sockets: we create and bind one socket first, then start the child threads, which create and connect the other sockets.
+++ 0MQ is Not a Neutral Carrier
A common question that newcomers to 0MQ ask (it's one I asked myself) is something like, "//how do I write an XYZ server in 0MQ?//" For example, "how do I write an HTTP server in 0MQ?"
The implication is that if we use normal sockets to carry HTTP requests and responses, we should be able to use 0MQ sockets to do the same, only much faster and better.
Sadly the answer is "this is not how it works". 0MQ is not a neutral carrier: it imposes a framing on the transport protocols it uses. This framing is not compatible with existing protocols, which tend to use their own framing. For example, here is an HTTP request, and a 0MQ request, both over TCP/IP:
[[code type="textdiagram"]]
+----------------+----+----+----+----+
| GET /index.html| 13 | 10 | 13 | 10 |
+----------------+----+----+----+----+
Figure # - HTTP request
[[/code]]
Where the HTTP request uses CR-LF as its simplest framing delimiter, 0MQ uses a length-specified frame:
[[code type="textdiagram"]]
+---+---+---+---+---+---+
| 5 | H | E | L | L | O |
+---+---+---+---+---+---+
Figure # - 0MQ request
[[/code]]
So you could write an HTTP-like protocol using 0MQ, using for example the request-reply socket pattern. But it would not be HTTP.
There is however a good answer to the question, "how can I make profitable use of 0MQ when making my new XYZ server?" You need to implement whatever protocol you want to speak in any case, but you can connect that protocol server (which can be extremely thin) to a 0MQ backend that does the real work. The beautiful part here is that you can then extend your backend with code in any language, running locally or remotely, as you wish. Zed Shaw's [http://www.mongrel2.org Mongrel2] web server is a great example of such an architecture.
+++ I/O Threads
We said that 0MQ does I/O in a background thread. One I/O thread (for all sockets) is sufficient for all but the most extreme applications. This is the magic '1' that we use when creating a context, meaning "use one I/O thread":
[[code language="C"]]
void *context = zmq_init (1);
[[/code]]
There is a major difference between a 0MQ application and a conventional networked application, which is that you don't create one socket per connection. One socket handles all incoming and outgoing connections for a particular point of work. E.g. when you publish to a thousand subscribers, it's via one socket. When you distribute work among twenty services, it's via one socket. When you collect data from a thousand web applications, it's via one socket.
This has a fundamental impact on how you write applications. A traditional networked application has one process or one thread per remote connection, and that process or thread handles one socket. 0MQ lets you collapse this entire structure into a single thread, and then break it up as necessary for scaling.
+++ Core Messaging Patterns
Underneath the brown paper wrapping of 0MQ's socket API lies the world of messaging patterns. If you have a background in enterprise messaging, these will be vaguely familiar. But to most 0MQ newcomers they are a surprise; we're so used to the TCP paradigm where a socket represents another node.
Let's recap briefly what 0MQ does for you. It delivers blobs of data (messages) to nodes, quickly and efficiently. You can map nodes to threads, processes, or boxes. It gives your applications a single socket API to work with, no matter what the actual transport (like in-process, inter-process, TCP, or multicast). It automatically reconnects to peers as they come and go. It queues messages at both sender and receiver, as needed. It manages these queues carefully to ensure processes don't run out of memory, overflowing to disk when appropriate. It handles socket errors. It does all I/O in background threads. It uses lock-free techniques for talking between nodes, so there are never locks, waits, semaphores, or deadlocks.
But cutting through that, it routes and queues messages according to precise recipes called //patterns//. It is these patterns that provide 0MQ's intelligence. They encapsulate our hard-earned experience of the best ways to distribute data and work. 0MQ's patterns are hard-coded, but future versions may allow user-definable patterns.
0MQ patterns are implemented by pairs of sockets with matching types. In other words, to understand 0MQ patterns you need to understand socket types and how they work together. Mostly this just takes learning; there is little that is obvious at this level.
The built-in core 0MQ patterns are:
* **Request-reply**, which connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
* **Publish-subscribe**, which connects a set of publishers to a set of subscribers. This is a data distribution pattern.
* **Pipeline**, which connects nodes in a fan-out / fan-in pattern that can have multiple steps, and loops. This is a parallel task distribution and collection pattern.
We looked at each of these in the first chapter. There's one more pattern that people tend to try to use when they still think of 0MQ in terms of traditional TCP sockets:
* **Exclusive pair**, which connects two sockets in an exclusive pair. This is a low-level pattern for specific, advanced use-cases. We'll see an example at the end of this chapter.
The zmq_socket[3] man page is fairly clear about the patterns; it's worth reading several times until it starts to make sense. We'll look at each pattern and the use-cases it covers.
These are the socket combinations that are valid for a connect-bind pair (either side can bind):
* PUB and SUB
* REQ and REP
* REQ and ROUTER
* DEALER and REP
* DEALER and ROUTER
* DEALER and DEALER
* ROUTER and ROUTER
* PUSH and PULL
* PAIR and PAIR
Any other combination will produce undocumented and unreliable results, and future versions of 0MQ will probably return errors if you try them. You can and will of course bridge other socket types //via code//, i.e. read from one socket type and write to another.
+++ High-level Messaging Patterns
These four core patterns are cooked into 0MQ. They are part of the 0MQ API, implemented in the core C++ library, and guaranteed to be available in all fine retail stores. If one day the Linux kernel includes 0MQ, for example, these patterns would be there.
On top, we add //high-level patterns//. We build these high-level patterns on top of 0MQ and implement them in whatever language we're using for our application. They are not part of the core library, do not come with the 0MQ package, and exist in their own space, as part of the 0MQ community.
One of the things we aim to provide in this guide is a set of such high-level patterns, ranging from small (how to handle messages sanely) to large (how to make a reliable publish-subscribe architecture).
+++ Working with Messages
On the wire, 0MQ messages are blobs of any size from zero upwards, fitting in memory. You do your own serialization using Google Protocol Buffers, XDR, JSON, or whatever else your applications need to speak. It's wise to choose a data representation that is portable and fast, but you can make your own decisions about trade-offs.
In memory, 0MQ messages are zmq_msg_t structures (or classes depending on your language). Here are the basic ground rules for using 0MQ messages in C:
* You create and pass around zmq_msg_t objects, not blocks of data.
* To read a message you use zmq_msg_init[3] to create an empty message, and then you pass that to zmq_recv[3].
* To write a message from new data, you use zmq_msg_init_size[3] to create a message and at the same time allocate a block of data of some size. You then fill that data using memcpy, and pass the message to zmq_send[3].
* To release (not destroy) a message you call zmq_msg_close[3]. This drops a reference, and eventually 0MQ will destroy the message.
* To access the message content you use zmq_msg_data[3]. To know how much data the message contains, use zmq_msg_size[3].
* Do not use zmq_msg_move[3], zmq_msg_copy[3], or zmq_msg_init_data[3] unless you read the man pages and know precisely why you need these.
Here is a typical chunk of code working with messages, which should be familiar if you have been paying attention. This is from the zhelpers.h file we use in all the examples:
[[code language="C"]]
//  Receive 0MQ string from socket and convert into C string
static char *
s_recv (void *socket) {
    zmq_msg_t message;
    zmq_msg_init (&message);
    zmq_recv (socket, &message, 0);
    int size = zmq_msg_size (&message);
    char *string = malloc (size + 1);
    memcpy (string, zmq_msg_data (&message), size);
    zmq_msg_close (&message);
    string [size] = 0;
    return (string);
}

//  Convert C string to 0MQ string and send to socket
static int
s_send (void *socket, char *string) {
    int rc;
    zmq_msg_t message;
    zmq_msg_init_size (&message, strlen (string));
    memcpy (zmq_msg_data (&message), string, strlen (string));
    rc = zmq_send (socket, &message, 0);
    assert (!rc);
    zmq_msg_close (&message);
    return (rc);
}
[[/code]]
You can easily extend this code to send and receive blobs of arbitrary length.
**Note that when you have passed a message to zmq_send[3], ØMQ will clear the message, i.e. set the size to zero. You cannot send the same message twice, and you cannot access the message data after sending it.**
If you want to send the same message more than once, create a second message, initialize it using zmq_msg_init[3] and then use zmq_msg_copy[3] to create a copy of the first message. This does not copy the data but copies the reference. You can then send the message twice (or more, if you create more copies) and the message will only be finally destroyed when the last copy is sent or closed.
0MQ also supports //multipart// messages, which let you handle a list of blobs as a single message. This is widely used in real applications and we'll look at that later in this chapter and in Chapter Three.
Some other things that are worth knowing about messages:
* 0MQ sends and receives them atomically, i.e. you get a whole message, or you don't get it at all.
* 0MQ does not send a message right away but at some indeterminate later time.
* You can send zero-length messages, e.g. for sending a signal from one thread to another.
* A message must fit in memory. If you want to send files of arbitrary sizes, you should break them into pieces and send each piece as a separate message.
* You must call zmq_msg_close[3] when finished with a message, in languages that don't automatically destroy objects when a scope closes.
And to be necessarily repetitive, do not use zmq_msg_init_data[3], yet. This is a zero-copy method and guaranteed to create trouble for you. There are far more important things to learn about 0MQ before you start to worry about shaving off microseconds.
+++ Handling Multiple Sockets
In all the examples so far, the main loop has been:
# wait for message on socket
# process message
# repeat
What if we want to read from multiple sockets at the same time? The simplest way is to connect one socket to multiple endpoints and get 0MQ to do the fan-in for us. This is legal if the remote endpoints are in the same pattern, but it would be illegal to e.g. connect a PULL socket to a PUB endpoint. Fun, but illegal. If you start mixing patterns you break future scalability.
The right way is to use zmq_poll[3]. An even better way might be to wrap zmq_poll[3] in a framework that turns it into a nice event-driven //reactor//, but that's significantly more work than we want to cover here.
Let's start with a dirty hack, partly for the fun of not doing it right, but mainly because it lets me show you how to do non-blocking socket reads. Here is a simple example of reading from two sockets using non-blocking reads. This rather confused program acts both as a subscriber to weather updates, and a worker for parallel tasks:
[[code type="example" title="Multiple socket reader" name="msreader"]]
[[/code]]
The cost of this approach is some additional latency on the first message (the sleep at the end of the loop, when there are no waiting messages to process). This would be a problem in applications where sub-millisecond latency was vital. Also, you need to check the documentation for nanosleep() or whatever function you use to make sure it does not busy-loop.
You can treat the sockets fairly by reading first from one, then the second, rather than prioritizing them as we did in this example. This is called "fair-queuing", something that 0MQ does automatically when one socket receives messages from more than one source.
Now let's see the same senseless little application done right, using zmq_poll[3]:
[[code type="example" title="Multiple socket poller" name="mspoller"]]
[[/code]]
+++ Handling Errors and ETERM
0MQ's error handling philosophy is a mix of fail-fast and resilience. Processes, we believe, should be as vulnerable as possible to internal errors, and as robust as possible against external attacks and errors. To give an analogy, a living cell will self-destruct if it detects a single internal error, yet it will resist attack from the outside by all means possible. Assertions, which pepper the 0MQ code, are absolutely vital to robust code; they just have to be on the right side of the cellular wall. And there should be such a wall. If it is unclear whether a fault is internal or external, that is a design flaw that needs to be fixed.
In C, assertions stop the application immediately with an error. In other languages you may get exceptions or halts.
When 0MQ detects an external fault it returns an error to the calling code. In some rare cases it drops messages silently, if there is no obvious strategy for recovering from the error. In a few places 0MQ still asserts on external faults, but these are considered bugs.
In most of the C examples we've seen so far there's been no error handling. **Real code should do error handling on every single 0MQ call**. If you're using a language binding other than C, the binding may handle errors for you. In C you do need to do this yourself. There are some simple rules, starting with POSIX conventions:
* Methods that create objects will return NULL if they fail.
* Other methods will return 0 on success, and other values (mostly -1) on an exceptional condition (usually failure).
* The error code is provided in {{errno}} or by zmq_errno[3].
* A descriptive error text for logging is provided by zmq_strerror[3].
There are two main exceptional conditions that you may want to handle as non-fatal:
* When a thread calls zmq_recv[3] with the ZMQ_NOBLOCK option and there is no waiting data. 0MQ will return -1 and set errno to EAGAIN.
* When a thread calls zmq_term[3] while other threads are doing blocking work. The zmq_term[3] call closes the context and all blocking calls exit with -1, with errno set to ETERM.
What this boils down to is that in most cases you can use assertions on 0MQ calls, like this, in C:
[[code language="C"]]
void *context = zmq_init (1);
assert (context);
void *socket = zmq_socket (context, ZMQ_REP);
assert (socket);
int rc;
rc = zmq_bind (socket, "tcp://*:5555");
assert (rc == 0);
[[/code]]
In the first version of this code I put the assert() call around the function. Not a good idea, since an optimized build will turn all assert() macros to null and happily wallop those functions. Use a return code, and assert the return code.
Let's see how to shut down a process cleanly. We'll take the parallel pipeline example from the previous section. If we've started a whole lot of workers in the background, we now want to kill them when the batch is finished. Let's do this by sending a kill message to the workers. The best place to do this is the sink, since it really knows when the batch is done.

How do we connect the sink to the workers? The PUSH/PULL sockets are one-way only. The standard 0MQ answer is: create a new socket flow for each type of problem you need to solve. We'll use a publish-subscribe model to send kill messages to the workers:

* The sink creates a PUB socket on a new endpoint.
* Workers connect their control (SUB) socket to this endpoint.
* When the sink detects the end of the batch, it sends a kill to its PUB socket.
* When a worker detects this kill message, it exits.

It doesn't take much new code in the sink:
[[code language="C"]]
void *control = zmq_socket (context, ZMQ_PUB);
zmq_bind (control, "tcp://*:5559");
...
//  Send kill signal to workers
zmq_msg_init_data (&message, "KILL", 5, NULL, NULL);
zmq_send (control, &message, 0);
zmq_msg_close (&message);
[[/code]]
[[code type="textdiagram"]]
           +-------------+
           |             |
           |  Ventilator |
           |             |
           +-------------+
           |    PUSH     |
           \------+------/
                  |
                tasks
                  |
   +--------------+--------------+
   |              |              |
 task           task           task
   |              |              |
   |      /-------|------+-------|------+-------\
   |      |       |      |       |      |       |
   v      v       v      v       v      v       |
/------+-----\ /------+-----\ /------+-----\    |
| PULL | SUB | | PULL | SUB | | PULL | SUB |    |
+------+-----+ +------+-----+ +------+-----+    |
|            | |            | |            |    |
|   Worker   | |   Worker   | |   Worker   |    |
|            | |            | |            |    |
+------------+ +------------+ +------------+    |
|    PUSH    | |    PUSH    | |    PUSH    |    |
\-----+------/ \-----+------/ \-----+------/    |
      |              |              |           |
   result         result         result         |
      |              |              |           |
      +--------------+--------------+           |
                     |                          |
                  results                       |
                     |                          |
                     v                          |
              /-------------\                   |
              |    PULL     |                   |
              +-------------+                   |
              |             |                   |
              |    Sink     |                   |
              |             |                   |
              +-------------+                   |
              |     PUB     |                   |
              \------+------/                   |
                     |                          |
                KILL signal                     |
                     |                          |
                     \--------------------------/

Figure # - Parallel Pipeline with Kill signaling
[[/code]]
Here is the worker process, which manages two sockets (a PULL socket getting tasks, and a SUB socket getting control commands), using the zmq_poll[3] technique we saw earlier:

[[code type="example" title="Parallel task worker with kill signaling" name="taskwork2"]]
[[/code]]

Here is the modified sink application. When it's finished collecting results, it broadcasts a KILL message to all workers:

[[code type="example" title="Parallel task sink with kill signaling" name="tasksink2"]]
[[/code]]

+++ Handling Interrupt Signals
Realistic applications need to shut down cleanly when interrupted with Ctrl-C or another signal such as SIGTERM. By default, these simply kill the process, meaning messages won't be flushed, files won't be closed cleanly, and so on.

Here is how we handle a signal in various languages:

[[code type="example" title="Handling Ctrl-C cleanly" name="interrupt"]]
[[/code]]

The program provides s_catch_signals(), which traps Ctrl-C (SIGINT) and SIGTERM. When either of these signals arrives, the s_catch_signals() handler sets the global variable s_interrupted. Your application will not die automatically; you now have to check explicitly for an interrupt, and handle it properly. Here's how:

* Call s_catch_signals() (copy this from interrupt.c) at the start of your main code. This sets up the signal handling.
* If your code is blocking in zmq_recv[3], zmq_poll[3], or zmq_send[3] when a signal arrives, the call will return with EINTR.
* Wrappers like s_recv() return NULL if they are interrupted.
* So, your application checks for an EINTR return code, a NULL return, and/or s_interrupted.

Here is a typical code fragment:
[[code]]
s_catch_signals ();
client = zmq_socket (...);
while (!s_interrupted) {
    char *message = s_recv (client);
    if (!message)
        break;          //  Ctrl-C used
}
zmq_close (client);
[[/code]]
If you call s_catch_signals() and don't test for interrupts, your application will become immune to Ctrl-C and SIGTERM, which may be useful, but is usually not.
+++ Detecting Memory Leaks

Any long-running application has to manage memory correctly, or eventually it'll use up all available memory and crash. If you use a language that handles this automatically for you, congratulations. If you program in C or C++ or any other language where you're responsible for memory management, here's a short tutorial on using valgrind, which among other things will report on any leaks your programs have.

* To install valgrind, e.g. on Ubuntu or Debian: {{sudo apt-get install valgrind}}.
* By default, 0MQ will cause valgrind to complain a lot. To remove these warnings, rebuild 0MQ with the ZMQ_MAKE_VALGRIND_HAPPY macro, thus:
[[code]]
$ cd zeromq2
$ export CPPFLAGS=-DZMQ_MAKE_VALGRIND_HAPPY
$ ./configure
$ make clean; make
$ sudo make install
[[/code]]
* Fix your applications to exit cleanly after Ctrl-C. For any application that exits by itself that's not needed, but for long-running applications (like devices) it is essential; otherwise valgrind will complain about all currently allocated memory.
* Build your application with -DDEBUG, if that's not your default setting. That ensures valgrind can tell you exactly where memory is being leaked.
* Finally, run valgrind thus:

[[code]]
valgrind --tool=memcheck --leak-check=full someprog
[[/code]]

And after fixing any errors it reported, you should get the pleasant message:

[[code]]
==30536== ERROR SUMMARY: 0 errors from 0 contexts...
[[/code]]
+++ Multipart Messages

0MQ lets us compose a message out of several frames, giving us a 'multipart message'. Realistic applications use multipart messages heavily, especially to make "envelopes"; we'll look at those later. What we'll learn now is simply how to safely (but blindly) read and write multipart messages, because otherwise the devices we write won't work with applications that use them.

When you work with multipart messages, each part is a zmq_msg item. E.g. if you are sending a message with five parts, you must construct, send, and destroy five zmq_msg items. You can do this in advance (and store the zmq_msg items in an array or structure), or as you send them, one by one.

Here is how we send the frames in a multipart message (we load each frame into a message object first):
[[code language="C"]]
zmq_send (socket, &message, ZMQ_SNDMORE);
...
zmq_send (socket, &message, ZMQ_SNDMORE);
...
zmq_send (socket, &message, 0);
[[/code]]
Here is how we receive and process all the parts in a message, be it single part or multipart:
[[code language="C"]]
while (1) {
    zmq_msg_t message;
    zmq_msg_init (&message);
    zmq_recv (socket, &message, 0);
    //  Process the message part
    zmq_msg_close (&message);
    int64_t more;
    size_t more_size = sizeof (more);
    zmq_getsockopt (socket, ZMQ_RCVMORE, &more, &more_size);
    if (!more)
        break;      //  Last message part
}
[[/code]]
Some things to know about multipart messages:

* When you send a multipart message, the first part (and all following parts) are only put on the wire when you send the final part.
* If you are using zmq_poll[3], when you receive the first part of a message, all the rest has also arrived.
* You will receive all parts of a message, or none at all.
* Each part of a message is a separate zmq_msg item.
* You will receive all parts of a message whether or not you check the RCVMORE option.
* On sending, 0MQ queues message parts in memory until the last part is sent, then sends them all.
* There is no way to cancel a partially sent message, except by closing the socket.
+++ Intermediates and Devices

Any connected set hits a complexity curve as the number of set members increases. A small number of members can all know about each other, but as the set gets larger, the cost to each member of knowing all other interesting members grows linearly, and the overall cost of connecting members grows quadratically. The solution is to break sets into smaller ones, and use intermediates to connect the sets.

This pattern is extremely common in the real world, and is why our societies and economies are filled with intermediaries who have no other real function than to reduce the complexity and scaling costs of larger networks. Intermediaries are typically called wholesalers, distributors, managers, etc.

A 0MQ network, like any other, cannot grow beyond a certain size without needing intermediaries. In 0MQ, we call these "devices". When we use 0MQ we usually start building our applications as a set of nodes on a network, with the nodes talking to each other directly, without intermediaries:
[[code type="textdiagram"]]
     +---------+
     |         |
     |  Node   |
     |         |
     +---------+
     | Socket  |
     \----+----/
          |
          |
     +----+-------+
     |            |
     |            |
/----+----\  /----+----\
| Socket  |  | Socket  |
+---------+  +---------+
|         |  |         |
|  Node   |  |  Node   |
|         |  |         |
+---------+  +---------+

Figure # - Small scale 0MQ application
[[/code]]
And then we extend the application across a wider network, placing devices in specific places and scaling up the number of nodes:
[[code type="textdiagram"]]
             +---------+
             |         |
             |  Node   |
             |         |
             +---------+
             | Socket  |
             \----+----/
                  |
                  |
     +------------+------------+
     |            |            |
     |            |            |
/----+----\  /----+----\  /----+----\
| Socket  |  | Socket  |  | Socket  |
+---------+  +---------+  +---------+
|         |  |         |  |         |
|  Node   |  |  Node   |  |  Device |
|         |  |         |  |         |
+---------+  +---------+  +---------+
                          | Socket  |
                          \----+----/
                               |
                               |
                          +----+-------+
                          |            |
                          |            |
                     /----+----\  /----+----\
                     | Socket  |  | Socket  |
                     +---------+  +---------+
                     |         |  |         |
                     |  Node   |  |  Node   |
                     |         |  |         |
                     +---------+  +---------+

Figure # - Larger scale 0MQ application
[[/code]]
0MQ devices generally connect a set of 'frontend' sockets to a set of 'backend' sockets, though there are no strict design rules. They ideally run with no state, so that it becomes possible to stretch applications over as many intermediates as needed. You can run them as threads within a process, or as stand-alone processes. 0MQ provides some very basic devices, but in practice you will develop your own.

0MQ devices can do intermediation of addresses, services, queues, or any other abstraction you care to define above the message and socket layers. Different messaging patterns have different complexity issues and need different kinds of intermediation. For example, request-reply works well with queue and service abstractions, while publish-subscribe works well with streams or topics.

What's interesting about 0MQ as compared to traditional centralized brokers is that you can place devices precisely where you need them, and they can do the optimal intermediation.

++++ A Publish-Subscribe Proxy Server

It is a common requirement to extend a publish-subscribe architecture over more than one network segment or transport. Perhaps a group of subscribers sits at a remote location. Perhaps we want to publish to local subscribers via multicast, and to remote subscribers via TCP.

We're going to write a simple proxy server that sits in between a publisher and a set of subscribers, bridging two networks. This is perhaps the simplest case of a useful device. The device has two sockets: a frontend facing the internal network, where the weather server is sitting, and a backend facing subscribers on the external network. It subscribes to the weather service on the frontend socket, and republishes its data on the backend socket:
[[code type="example" title="Weather update proxy" name="wuproxy"]]
[[/code]]

We call this a //proxy// because it acts as a subscriber to publishers, and as a publisher to subscribers. That means you can slot this device into an existing network without affecting it (of course the new subscribers need to know to speak to the proxy).
[[code type="textdiagram"]]
                  +-----------+
                  |           |
                  | Publisher |
                  |           |
                  +-----------+
                  |    PUB    |
                  \-----------/
                      bind
           tcp://192.168.55.210:5556
                       |
                       |
      +----------------+----------------+
      |                |                |
      |                |                |
   connect          connect          connect
/------------\  /------------\  /------------\
|    SUB     |  |    SUB     |  |    SUB     |
+------------+  +------------+  +------------+
|            |  |            |  |            |
| Subscriber |  | Subscriber |  |  Forwarder |
|            |  |            |  |            |
+------------+  +------------+  +------------+
                                |    PUB     |
 Internal network               \------------/
--------------------------------------------------------
 External network                    bind
                             tcp://10.1.1.0:8100
                                      |
                                      |
                          +-----------+-----------+
                          |                       |
                          |                       |
                       connect                 connect
                  /------------\          /------------\
                  |    SUB     |          |    SUB     |
                  +------------+          +------------+
                  |            |          |            |
                  | Subscriber |          | Subscriber |
                  |            |          |            |
                  +------------+          +------------+

Figure # - Forwarder proxy device
[[/code]]
Note that this application is multipart safe. It correctly detects multipart messages and sends them as it reads them. If we did not set the SNDMORE option on outgoing multipart data, the final recipient would get a corrupted message. You should always make your devices multipart safe, so that there is no risk they will corrupt the data they switch.
++++ A Request-Reply Broker

Let's explore how to solve a problem of scale by writing a little message queuing broker in 0MQ. We'll look at the request-reply pattern for this case.

In the Hello World client-server application, we have one client that talks to one service. However, in real cases we usually need to allow multiple services as well as multiple clients. This lets us scale up the power of the service (many threads or processes or boxes, rather than just one). The only constraint is that services must be stateless, all state being in the request or in some shared storage such as a database.

There are two ways to connect multiple clients to multiple servers. The brute-force way is to connect each client socket to multiple service endpoints. One client socket can connect to multiple service sockets, and requests are load-balanced among these services. Let's say you connect a client socket to three service endpoints: A, B, and C. The client makes requests R1, R2, R3, R4. R1 and R4 go to service A, R2 goes to B, and R3 goes to service C:
[[code type="textdiagram"]]
              +-----------+
              |           |
              |  Client   |
              |           |
              +-----------+
              |    REQ    |
              \-----+-----/
                    |
            R1, R2, R3, R4
                    |
     +--------------+--------------+
     |              |              |
   R1, R4           R2             R3
     |              |              |
     v              v              v
/---------\    /---------\    /---------\
|   REP   |    |   REP   |    |   REP   |
+---------+    +---------+    +---------+
|         |    |         |    |         |
| Service |    | Service |    | Service |
|    A    |    |    B    |    |    C    |
|         |    |         |    |         |
+---------+    +---------+    +---------+

Figure # - Load balancing of requests
[[/code]]
This design lets you add more clients cheaply. You can also add more services. Each client will load-balance its requests to the services. But each client has to know the service topology. If you have 100 clients, and then you decide to add three more services, you need to reconfigure and restart 100 clients in order for the clients to know about the three new services.

That's clearly not the kind of thing we want to be doing at 3am when our supercomputing cluster has run out of resources and we desperately need to add a couple of hundred new service nodes. Too many static pieces are like liquid concrete: knowledge is distributed, and the more static pieces you have, the more effort it takes to change the topology. What we want is something sitting in between clients and services that centralizes all knowledge of the topology. Ideally, we should be able to add and remove services or clients at any time without touching any other part of the topology.

So we'll write a little message queuing broker that gives us t…
