Hi,
I have now proceeded to create non-blocking I/O channels but can't get your select() implementation to work. The problem is that it doesn't time out as it should and therefore acts as blocking I/O.
int selectOnRead(int sockfd, unsigned long secTimeout, unsigned long uTimeout)
{
   fd_set readFdSet;
   struct timeval tv;

   tv.tv_sec = secTimeout;
   tv.tv_usec = uTimeout;
   FD_ZERO(&readFdSet);
   FD_SET(sockfd, &readFdSet);
   return select(FD_SETSIZE, &readFdSet, NULL, NULL, &tv);
}
The code above is as simple as it gets by using select; only one file descriptor that should timeout on the values set in "tv". If we go through the arguments to select they should be:
- FD_SETSIZE = The nfds argument: the highest-numbered FD plus 1 (or simply FD_SETSIZE)
- readFdSet = The set of FDs to watch for becoming readable without blocking
- tv = A simple time structure that can hold seconds and microseconds
(If tv is NULL, select blocks indefinitely; if both time fields are 0, select returns immediately; otherwise the values are used as the timeout - http://www.gnu.org/software/libc/manual/html_mono/libc.html#Waiting-for-I_002fO)
I have tested the following:
1. If both time fields in tv are set to 0, the VMCI socket still blocks.
2. It doesn't matter whether a datagram or a stream socket is used.
3. If all three observable sets are NULL, select does not block (or time out) but instead returns an error (-1).
No matter what I put into the arguments, it blocks until a client sends it a packet. Is this a known bug, or is my implementation wrong?
For a more complete example see the server implementation in the attachment.
Kind Regards
Andreas
The timeout value is currently ignored on windows (tracked by bug 374790). I believe this should work for Linux though.
Unfortunately I'm in a Windows environment which renders this function (and VMCI) more or less worthless at the moment.
1. Do you know of a workaround?
2. When will the bug be fixed?
3. How can I access your bugzilla to view information about current bugs?
At the moment I can't use VMCI with 6.5 and have to downgrade to 6.0 to make it work. It would be great if you could answer my second question so I know what timeframe I'm working with.
Kind Regards
Andreas
1. Do you know of a workaround?
I guess it depends on what you are trying to use select for. If you wanted to call recv, for example ... you could
do a setsockopt with SO_RCVTIMEO of 0 on your socket. Then your recv calls will return immediately. This
is obviously per socket, so you would have to set it on all the ones you care about. Hope that helps.
2. When will the bug be fixed?
VMware policy is to not comment on future releases ... so unfortunately I can't say much here.
3. How can I access your bugzilla to view information about current bugs?
Bugzilla is internal only ... so you can't view the bug. I gave you the bug number so you would have something to track the issue by when talking in the forum.
>> 1. Do you know of a workaround?
> I guess it depends on what you are trying to use select for. If you wanted to call recv, for example ... you could
> do a setsockopt with SO_RCVTIMEO of 0 on your socket. Then your recv calls will return immediately. This
> is obviously per socket, so you would have to set it on all the ones you care about. Hope that helps.
I don't think that is supported on datagram sockets; at least I can't get it to work. (I'm not sure it works on stream connections either; it's not documented in the VMCI sockets API.)
>> 2. When will the bug be fixed?
> VMware policy is to not comment on future releases ... so unfortunately I can't say much here.
Ok, I'll go back to 6.0 until it's fixed.
Thanks for the help.
Kind Regards
Andreas
> I don't think that is supported on datagram sockets, at least I can't
> get it to work (Not sure if it works on stream connections either,
> it's not documented in the VMCI-socket API).
It should work. Internally we call KeWaitForSingleObject to do the wait on Windows, whose documentation says the following -- "If *Timeout = 0, the routine returns without waiting." I have not tried this myself though.
How are you setting the socket option? Can you call getsockopt after the set to ensure that it is in fact set to a value of 0?
int ret;
int setTimeo = 0;
int getTimeo;
socklen_t getTimeoLen;

ret = setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                 (void *)&setTimeo, sizeof setTimeo);
if (ret == -1) {
   goto close;
}

getTimeoLen = sizeof getTimeo;
ret = getsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                 (void *)&getTimeo, &getTimeoLen);
Ok ... we actually force the timeout to not be zero when we wait ... so you are right ... this won't work.
You are right that you are overriding the timeout parameter with a low default value (or no timeout) in your code when it's not zero. However, the socket spec states that a value of 0 means blocking mode and any other value puts the socket into non-blocking behaviour. That means that if I provide any non-zero timeout, I get a non-blocking socket and can handle the timeout on my side instead, which is fine. The reason I didn't get it to work in my earlier test was that I had mixed up native socket code with VMCI and tried to use afVMCI instead of SOL_SOCKET.
Thanks for the help - I can now continue my work on 6.5.
Kind Regards
Andreas