Preemptive multi-tasking. We all kinda assume it's wonderful, but over the last half year or so, I've started wondering what the tradeoffs are.
The third strategy isn't all that different from setting up a big select() loop, which AFAIK provides pretty close to optimal throughput for I/O-bound stuff (if you're really, truly I/O-bound, a gatrillion threads on a bazillion CPUs isn't going to do a whole lot for you) -- modulo a little added complexity from having multiple threads, of course. The thing that makes it tricky is that you have to store all the algorithm's state external to the thread before you hand control back, and I suspect that's why people don't do it as much. In a web server, for example, the server process doesn't really know a whole lot about what kind of state the various CGI/ASP/whatever scripts are holding, which is probably why preforking/thread-pool implementations have been commonly used for everything except pure file serving.
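To make that concrete, here's a minimal sketch of that kind of select() loop in C, with each connection's progress kept in an explicit struct instead of on a thread's stack. The names (struct conn, handle_readable) are made up for illustration, and setup/error handling is mostly elided:

/* Minimal sketch: a single-threaded select() loop where each connection's
 * progress lives in an explicit struct, not on any thread's stack.
 * The accept() logic that fills in conn slots is omitted; slots are
 * assumed to start with fd = -1. */
#include <sys/select.h>
#include <unistd.h>

#define MAX_CONNS 64

struct conn {
    int fd;              /* -1 when the slot is free */
    size_t bytes_read;   /* algorithm state stored outside any thread */
    char buf[4096];
};

static struct conn conns[MAX_CONNS];

static void handle_readable(struct conn *c) {
    ssize_t n = read(c->fd, c->buf + c->bytes_read,
                     sizeof(c->buf) - c->bytes_read);
    if (n <= 0) {                     /* EOF or error: drop the connection */
        close(c->fd);
        c->fd = -1;
        return;
    }
    c->bytes_read += (size_t)n;       /* resume from here on the next event */
}

void event_loop(void) {
    for (;;) {
        fd_set rfds;
        int maxfd = -1;
        FD_ZERO(&rfds);
        for (int i = 0; i < MAX_CONNS; i++) {
            if (conns[i].fd >= 0) {
                FD_SET(conns[i].fd, &rfds);
                if (conns[i].fd > maxfd) maxfd = conns[i].fd;
            }
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;                 /* e.g. EINTR: just retry */
        for (int i = 0; i < MAX_CONNS; i++)
            if (conns[i].fd >= 0 && FD_ISSET(conns[i].fd, &rfds))
                handle_readable(&conns[i]);
    }
}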
Yes, very true. The only problem with a select() loop is its inability to take advantage of multiple cores.
As for keeping the state external, that, IMO, is more of an issue with language/library design. If the language supported continuations, it wouldn't be so bad... or if you implemented a cooperative threading model like what's described above (M user : N kernel, without preemption, with yields at I/O callbacks or at an explicit request), I think you'd keep a fairly simple programming paradigm.
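For example, here's a rough sketch of explicit cooperative yields using POSIX ucontext (getcontext/makecontext/swapcontext): one kernel thread, two user contexts that switch only when they ask to. Purely illustrative; the yield points stand in for the I/O callbacks mentioned above:

/* Rough sketch of cooperative scheduling with explicit yields, using
 * POSIX ucontext. One kernel thread, two user contexts; nothing runs
 * "underneath" you between yields, which is what keeps the programming
 * model simple. */
#include <ucontext.h>
#include <stdio.h>

static ucontext_t scheduler_ctx, task_ctx[2];
static int current = 0;

static void yield_to_scheduler(void) {
    swapcontext(&task_ctx[current], &scheduler_ctx);
}

static void task_body(void) {
    int id = current;                 /* lives on this task's own stack */
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", id, i);
        yield_to_scheduler();         /* explicit yield, e.g. at an I/O point */
    }
}

int main(void) {
    static char stacks[2][64 * 1024];
    for (int i = 0; i < 2; i++) {
        getcontext(&task_ctx[i]);
        task_ctx[i].uc_stack.ss_sp = stacks[i];
        task_ctx[i].uc_stack.ss_size = sizeof(stacks[i]);
        task_ctx[i].uc_link = &scheduler_ctx;   /* return here when done */
        makecontext(&task_ctx[i], task_body, 0);
    }
    /* trivial round-robin "scheduler": each task needs four entries
     * (three yields plus the final return) */
    for (int round = 0; round < 4; round++)
        for (current = 0; current < 2; current++)
            swapcontext(&scheduler_ctx, &task_ctx[current]);
    return 0;
}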
Pth is a very portable POSIX/ANSI-C based library for Unix platforms which provides non-preemptive priority-based scheduling for multiple threads of execution (aka ``multithreading'') inside event-driven applications. ... The event facility allows threads to wait until various types of events occur, including pending I/O on filedescriptors, asynchronous signals, elapsed timers, pending I/O on message ports, thread and process termination, and even customized callback functions.
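A rough idea of what using it looks like, written from memory of the Pth API (pth_init, pth_spawn, pth_read, pth_join); check the pth(3) man page for the exact signatures:

/* Sketch of GNU Pth usage, from memory of its API. pth_read suspends only
 * the calling Pth thread until the fd is readable; other Pth threads keep
 * running, all inside one kernel thread. */
#include <pth.h>
#include <stdio.h>
#include <unistd.h>

static void *reader(void *arg) {
    int fd = *(int *)arg;
    char buf[256];
    ssize_t n = pth_read(fd, buf, sizeof(buf));  /* cooperative blocking read */
    printf("got %zd bytes\n", n);
    return NULL;
}

int main(void) {
    pth_init();
    int fd = STDIN_FILENO;
    pth_t t = pth_spawn(PTH_ATTR_DEFAULT, reader, &fd);
    pth_join(t, NULL);
    pth_kill();
    return 0;
}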
That looks pretty slick, but my initial read of the documentation is that it doesn't support real concurrency? So you'd need to add an extra layer with several processor threads, each of which would be running lots of threads in its own pth space.
Well, non-preemptive sorta implies no concurrency, doesn't it? "Preemption" just means simulating concurrency by having threads interrupt one another. I'd be surprised if they did provide concurrency, because in the multi-processor case, you're back to needing all the standard locking primitives, etc. to guarantee correctness, which is a lot of extra baggage. (In a non-preemptive system, you don't need locks as long as you never give up control during a critical section.)
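To illustrate that last point, a tiny sketch (the names are made up): under cooperative scheduling, a critical section is safe with no lock as long as it contains no yield point.

struct queue { int items[128]; int head, tail; };

void enqueue(struct queue *q, int v) {
    /* No lock needed: nothing in here yields, so no other cooperative
     * thread can run in the middle of this read-modify-write. */
    q->items[q->tail] = v;
    q->tail = (q->tail + 1) % 128;
    /* With preemptive threads, or real multi-core concurrency, these same
     * two statements would need a mutex or an atomic protocol. */
}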
I think we're in the territory of mincing words here, but from my perspective, non-preemptive means your thread of execution doesn't get suspended because something of "higher priority" needs your CPU/execution resource. That definition does not make a statement about whether or not another execution context is concurrently messing with the same state you are. Granted, in a single-core world, this distinction doesn't exist because, as you say, you're only simulating concurrency. However, in a world with real concurrency due to multiple execution resources, you can still have a semantic difference between preemptive and non-preemptive scheduling models.