writev() vs send() when using signals


I ran into an interesting problem today.  I have a C application with functions calling writev() to send data across the network.  The messages being sent range from very small – a few bytes – to over 1MB.  These functions have worked great up to this point.

Today I integrated a number of high-resolution timers.  When the timers go off, they raise a signal which is handled by a signal handler.  It turns out that writev() can be interrupted by these signals, and unfortunately it doesn't seem to recover correctly from that interruption.  I haven't figured out why, but I did find that send() does recover correctly.
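
For context, here is roughly the shape of that timer setup.  This is only a sketch assuming POSIX timers (timer_create) delivering SIGALRM; my real timers and handler do more than this.  The relevant detail is that the handler is installed without SA_RESTART, which is one way a pending writev() or send() ends up getting interrupted instead of silently restarted.

    #include <signal.h>
    #include <string.h>
    #include <time.h>

    static volatile sig_atomic_t ticks;

    /* Keep the handler async-signal-safe: just record the tick. */
    static void on_timer(int signo)
    {
        (void)signo;
        ticks++;
    }

    static void start_timer(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_timer;
        /* No SA_RESTART here, so a blocking writev()/send() can be
         * interrupted by the signal rather than restarted. */
        sigaction(SIGALRM, &sa, NULL);

        struct sigevent sev;
        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo  = SIGALRM;

        timer_t tid;
        timer_create(CLOCK_MONOTONIC, &sev, &tid);

        struct itimerspec its;
        its.it_value.tv_sec  = 0;
        its.it_value.tv_nsec = 1000000;   /* first expiry after 1 ms ... */
        its.it_interval      = its.it_value;  /* ... then every 1 ms */
        timer_settime(tid, 0, &its, NULL);
    }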

Since I don't need the scatter/gather feature that writev()'s iovec structures provide, there was no reason to keep it.  So I switched these functions to send() and wrapped the call in a loop to guarantee the full message gets sent.  The signal still causes send() to return early, but it returns the number of bytes it actually wrote, so you can detect the short send and just resend whatever is left (see the sketch below).
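
Here's a minimal sketch of the kind of retry loop I mean.  The helper name send_all and the exact error handling are just for illustration; the real functions in my application look a bit different.

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Illustrative helper: keep calling send() until every byte has gone out.
     * Returns 0 on success, -1 on error with errno set by send(). */
    static int send_all(int sockfd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t remaining = len;

        while (remaining > 0) {
            ssize_t n = send(sockfd, p, remaining, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted before anything was sent: just retry */
                return -1;      /* a real error */
            }
            /* A signal (or a full socket buffer) can cause a short send:
             * advance past what was written and loop for the rest. */
            p += n;
            remaining -= (size_t)n;
        }
        return 0;
    }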

This works great and is not affected by the frequent signals.  This was interesting only because I didn't see anything in the man pages warning me about the problems with writev() and signals.  But at least now I know what to do if I hit this problem again.
