Is there a difference in sys CPU% between fork and close? Or is this just an application design problem?
The application continuously receives messages, spawning a child (fork()) for each new message. The message is processed and, after a confirmation is sent, the socket is closed. It uses a max-number-of-sockets parameter to manage the influx.
From the code it looks like:
1) it forks,
2) it checks the current socket count against the max, and
3a) if below the max, it goes on processing; or
3b) if at or above the max, it closes the (max+1)-th socket.
The rate of processing is slower than the influx of new messages, so after a while it falls into mode 3b) very often.
The application works fine even close to the max number of sockets (4-12% sys CPU). Yet as soon as it exceeds the max, sys CPU usage goes to >= 90%.
Why?