Quote:
Originally posted by forumGuy
It is actually an Operating Systems course, and we have studied all the major topics mentioned; on an academic level I understand it, but implementation is something different. In regards to deadlocks, I don't know if this helps, but the Banker's algorithm may be a start (lots of overhead, and predefined info is needed). Don't beat yourself up about the deadlock issue: even the big boys (Oracle) occasionally end up in this state, and most operating systems, including UNIX, ignore it. Do you know which deadlock condition is being encountered, i.e. mutual exclusion, hold and wait, circular wait, or no preemption? (Studied for a final, so it's fresh in the noggin.)
FI
Actually, I myself am not encountering any problems with deadlocks. I understand them quite well and am familiar with the Banker's algorithm. However, I'm lucky in that I don't need it, because I am in complete control of my resources and can simply define (and strictly follow) locking rules that guarantee I won't deadlock.
In my case, this is simple. In the case of an OS or a DB, it's not so simple, because they are not in control of what the end interface (process/client) will request or do with the resources. Funnily enough, in these circumstances the Banker's Algorithm is generally of no help, because most of the time the OS/DB has no knowledge of which resources are going to be (or ever will be) requested by the processes/clients it is scheduling. This renders the Banker's Algorithm useless, since it requires every process to declare all of its resource requirements to the scheduler before entering a "critical section"; unless the OS/DB places strict requirements on its interface (which they'd prefer not to do), it can deadlock, and there's nothing to be done about it besides detecting it and telling the application that it must correct it.
In *nix, errno is set to EDEADLK when a system call fails because completing it would have caused a deadlock. The application can then either die, release its resources and try again, or just spin retrying indefinitely in the hope that the condition clears (which is unlikely).
In Oracle, the deadlock is detected and broken by automatically rolling back the client that has done less work at the time of detection; after which, that client can attempt to redo all of its work from the beginning.
Neither of these is a "fault" of the OS/DB; they are just inherent issues stemming from the fact that neither can predict the resource usage of its clientele. (Ironic, in that Oracle's name suggests it should be capable of doing exactly that, however.)
Anyway, be grateful that your OS course is actually making you implement or play with these things in code. Theory means little without any hands-on experience. In my opinion, too many schools teach theory without ever giving students practical work that makes use of it.