Single-threaded servers are essentially event-driven: they execute in response to a time-out or an I/O event. They typically spend little CPU time on any given request, because they must get back to select quickly to service other events that may have queued up in the meantime. Most production single-threaded servers also use nonblocking filehandles (combining the second and third options listed in the section "Handling Multiple Clients"). In the next chapter, we will build a small message-passing library using these techniques. The advantage of the single-threaded approach is that frequent, short-lived requests are handled with very little overhead. In addition, data structures can easily be shared among all concurrent conversations, or cached for future ones. A chat server, for example, benefits most from such an architecture.
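A select loop of this kind can be built with the IO::Select and IO::Socket::INET modules. What follows is a minimal sketch, not code from this book; the port number, the echo-back behavior, and the five-second timeout are arbitrary choices for illustration:

    use strict;
    use IO::Socket::INET;
    use IO::Select;

    my $listener = IO::Socket::INET->new(
        LocalPort => 12345,
        Listen    => 5,
        Reuse     => 1,
    ) or die "Cannot listen: $!";

    my $readers = IO::Select->new($listener);

    while (1) {
        # Wait up to 5 seconds for an I/O event; an empty list means a time-out.
        my @ready = $readers->can_read(5);
        unless (@ready) {
            next;    # time-out: a real server would do periodic housekeeping here
        }
        for my $fh (@ready) {
            if ($fh == $listener) {
                # New connection: accept it and watch it along with the others.
                my $client = $listener->accept or next;
                $client->blocking(0);    # nonblocking, so a slow peer can't stall the loop
                $readers->add($client);
            }
            else {
                # Existing client: read what's available and echo it back.
                my $buf;
                if (sysread($fh, $buf, 4096)) {
                    syswrite($fh, $buf);    # a real server would handle partial writes
                }
                else {
                    # EOF or error: stop watching this client.
                    $readers->remove($fh);
                    close($fh);
                }
            }
        }
    }

Because everything runs in one process, the loop can keep per-client state in ordinary Perl data structures and share it freely between conversations, which is exactly why a chat server fits this model so well.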
The multiprocess solution is chosen when the server cannot guarantee how long a given request is going to take. Web servers follow this approach and simply spawn a CGI (Common Gateway Interface) program to handle the conversation with the corresponding web browser on the other end. Nowadays, the trend is to handle quick tasks in the web server itself and spawn programs only when the task might hold up the entire server. Of course, the problem is that spawning processes is expensive, so a popular option is to prespawn a fixed number of processes and hand the task to them whenever a request comes in. Clearly, if there are many more sockets than there are prespawned processes, the parent has no option but to use select to multiplex between them. As you can see, the options described in the previous section are by no means independent of each other.
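In Perl, the simplest multiprocess arrangement is to fork a child for each accepted connection. The sketch below is illustrative only; the port number and the handle_request routine are hypothetical stand-ins for whatever work the real server does:

    use strict;
    use IO::Socket::INET;
    use POSIX ':sys_wait_h';

    # Reap exited children so they don't linger as zombies.
    $SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,
        Listen    => 10,
        Reuse     => 1,
    ) or die "Cannot listen: $!";

    while (1) {
        my $client = $listener->accept or next;   # accept can be interrupted by SIGCHLD
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: handle this one conversation, however long it takes, then exit.
            close($listener);
            handle_request($client);
            exit 0;
        }
        # Parent: close its copy of the socket and go back to accepting.
        close($client);
    }

    sub handle_request {
        my ($sock) = @_;
        print $sock "hello\r\n";    # stand-in for the real conversation
        close($sock);
    }

A prespawning server turns this inside out: the parent forks a fixed pool of children up front, and each child sits in its own accept (or select) loop on the shared listening socket, so no fork has to be paid at request time.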
Multithreading is an option if the environment supports it (Perl doesn't yet). Java is enthusiastic about this approach and expects a thread to block on I/O calls; in fact, it doesn't even provide an interface to select. The advantage of this approach is that it is much more lightweight than the multiprocess version. In addition, you get parallelism and data sharing. The disadvantage is that typical workstations tend to perform badly if you introduce, say, 40 or more kernel-level threads, so they can support only a limited number of concurrent clients. Solaris fares better here, because it distinguishes between lightweight, user-level threads and kernel threads. In any case, this is not an option currently available to a Perl programmer, so the discussion is moot.