Spin locks are the fundamental and most common synchronization primitives used in drivers. This article discusses the DDI implementation; the following topics are covered:
See ``Spin locks (ODDI)'' for information about the ODDI spin locks.
Three types of spin locks are supported for DDI drivers:
Spin locks can be used to guard critical code regions that are relatively long if there is low contention for the resource protected by the lock. Spin locks must not be held when there is a possibility of a context switch.
Read/write spin locks are used when the lock is mostly acquired in the read-mode. These locks have a strong writer preference, so a spinning writer will acquire (have preference for) the lock before a spinning reader. This eliminates starvation for the contexts that want to acquire the lock in the write-mode. As the expected access pattern is read-mostly, starvation of contexts that want to acquire the lock in the read-mode is unlikely.
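As an illustration of this read-mostly pattern, the sketch below uses the RW_ALLOC(D3), RW_RDLOCK(D3), RW_WRLOCK(D3), and RW_UNLOCK(D3) interfaces; the device structure, hierarchy value, and lkinfo name string are invented for the example:

```c
#include <sys/types.h>
#include <sys/ksynch.h>   /* rwlock_t, RW_ALLOC, RW_RDLOCK, ... */
#include <sys/kmem.h>     /* KM_SLEEP */

#define CFG_HIER 4                              /* invented hierarchy value */

static lkinfo_t cfg_lkinfo = { "mydrv cfg lock" };

struct mydev {
        rwlock_t *cfg_lock;     /* guards read-mostly configuration */
        int       speed;
};

void
mydev_init(struct mydev *dp)
{
        /* Minimum IPL plbase: no interrupt context touches this data. */
        dp->cfg_lock = RW_ALLOC(CFG_HIER, plbase, &cfg_lkinfo, KM_SLEEP);
}

int
mydev_get_speed(struct mydev *dp)       /* the common, read-mode path */
{
        pl_t opl;
        int  speed;

        opl = RW_RDLOCK(dp->cfg_lock, plbase);  /* many readers may hold */
        speed = dp->speed;
        RW_UNLOCK(dp->cfg_lock, opl);
        return speed;
}

void
mydev_set_speed(struct mydev *dp, int speed)    /* the rare, write-mode path */
{
        pl_t opl;

        opl = RW_WRLOCK(dp->cfg_lock, plbase);  /* exclusive; writers preferred */
        dp->speed = speed;
        RW_UNLOCK(dp->cfg_lock, opl);
}
```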
The manual pages for functions and structures used for spin locks are:
The locking mechanisms are generally used for multithreaded drivers. Non-multithreaded drivers use the spl(D3) functions instead. A non-multithreaded driver may use locks, although the only reasonable use is to access data structures that are shared with other cooperating drivers or daemons. Using a lock rather than an spl call for a data structure that is managed solely within a non-multithreaded driver is overkill.
DDI spin locks use the IPL and hierarchy attributes to guard against deadlocks and to ensure serial access to a resource.
Interrupt Priority Levels (IPLs) (see ``IPLs (Interrupt Priority Levels)'') play an important role in supporting locking primitives.
Assume a driver-defined lock is used to protect critical regions of code that can be acquired by one of the base contexts and the interrupt handler. One example is code that manipulates a device-specific job list. As long as the base context and interrupt handler are executed on different processors, data structures are completely protected and serial access to the shared resource is ensured. If, however, the interrupt handler is invoked on the same processor where the currently executing base context is holding the lock in question, the system will become deadlocked: the interrupt context busy-waits for the spin lock, and the processor cannot schedule the base context, which alone can release the lock.
This leads to the definition of an important attribute of a DDI spin lock, the minimum IPL. This is specified as an argument to the function that sets the lock, from the list of values defined on the pl(D5) manual page. The minimum IPL is the greater of the following two IPL values:
For example, all the locks acquired by a disk driver have a minimum IPL of pldisk, which corresponds to the priority level at which the disk interrupt handler executes. Consider the case of a multiprocessor system where the processors are numbered in sequence 1 to n and both base and interrupt contexts of the disk driver acquire the same set of locks to protect critical regions of code. If a base context acquires a lock at its minimum IPL (pldisk in this example) on processor 1, the disk interrupt handler is effectively locked out and cannot be executed on the same processor. The handler can, however, be executed on any other processor whose current IPL is less than pldisk; since the interrupt handler will contend for the lock held by the base context, it will be forced to spin until it is granted exclusive access to the critical region.
The itimeout(D3) routine is considered a type of interrupt, and the minimum IPL should be high enough to block out the highest-level callback routine that can contend for the critical regions.
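For example, if a driver schedules a callback with itimeout(D3) that contends for the same critical region as base-level code, the lock's minimum IPL must be at least the pl argument passed to itimeout( ). A hedged sketch, with driver names, the hierarchy value, and the lkinfo name string invented:

```c
#include <sys/types.h>
#include <sys/ksynch.h>
#include <sys/kmem.h>

#define TICK_HIER 5                             /* invented hierarchy value */

static lkinfo_t tick_lkinfo = { "mydrv tick lock" };
static lock_t  *tick_lock;

static void
mydrv_tick(void *arg)           /* runs as an interrupt at pltimeout */
{
        pl_t opl;

        opl = LOCK(tick_lock, pltimeout);
        /* ... update driver state shared with base level ... */
        UNLOCK(tick_lock, opl);
}

void
mydrv_start(void *dp)
{
        /* Minimum IPL pltimeout blocks the callback on this CPU
         * whenever base level holds the lock. */
        tick_lock = LOCK_ALLOC(TICK_HIER, pltimeout, &tick_lkinfo, KM_SLEEP);

        /* Real code would save the toid_t return for untimeout(D3). */
        itimeout(mydrv_tick, dp, drv_usectohz(100000), pltimeout);
}
```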
All DDI functions that allocate or initialize spin locks allow the calling code to specify the minimum IPL to be associated with the lock. In addition, all the contexts trying to acquire a given lock should specify an IPL greater than or equal to the minimum IPL. The current implementation of spin locks raises the IPL to the value suggested by the calling context before trying to acquire the lock. If unsuccessful, the IPL is dropped to its old value and the context spins until the lock is released, at which time it starts the acquisition process over again.
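Continuing the disk-driver example, the sketch below allocates a lock with a minimum IPL of pldisk and acquires it from base context at that IPL, using the LOCK_ALLOC(D3), LOCK(D3), and UNLOCK(D3) interfaces; the function names, hierarchy value, and lkinfo name string are invented:

```c
#include <sys/types.h>
#include <sys/ksynch.h>
#include <sys/kmem.h>

#define DSK_HIER 6                              /* invented hierarchy value */

static lkinfo_t dsk_lkinfo = { "dsk joblist lock" };
static lock_t  *dsk_lock;

void
dskinit(void)
{
        /* Minimum IPL pldisk: the disk interrupt handler also takes
         * this lock to manipulate the job list. */
        dsk_lock = LOCK_ALLOC(DSK_HIER, pldisk, &dsk_lkinfo, KM_SLEEP);
}

void
dsk_queue_job(void *jp)         /* called from base context */
{
        pl_t opl;

        /* Acquire at the minimum IPL (or higher): LOCK( ) raises the
         * IPL to pldisk before taking the lock, so the disk interrupt
         * cannot preempt this CPU while the lock is held. */
        opl = LOCK(dsk_lock, pldisk);
        /* ... add jp to the device job list ... */
        UNLOCK(dsk_lock, opl);          /* restores the saved IPL */
}
```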
Associated with every lock is a hierarchy value, defined on the hierarchy(D5) manual page, used to specify the sequence in which locks should be acquired.
The lock hierarchy is a tool that helps prevent deadlocks. Consider an example where Context A is holding spin lock X while busy-waiting for lock Y, which is held by Context B; Context B, in turn, is waiting for lock X. This is a classic deadlock condition. This type of system deadlock can be easily detected and avoided by assigning hierarchy values to the various locks and establishing a rule that requires contexts to acquire locks in order of increasing hierarchy and release locks in order of decreasing hierarchy.
A side effect of maintaining this hierarchy is that some functions cannot be called while holding a spin lock. This restriction applies if the call could acquire another spin lock that is equal or lower in the lock hierarchy (an "out-of-order lock acquisition"). For example, the putnext(D3str) function, which calls the next put(D2str) routine on the queue, has this restriction. If Driver A held a lock while put( ) was called for Driver B, a deadlock would occur if Driver B were to call putnext( ) for Driver A, either recursively or on a separate thread. This information is provided in the "Context and synchronization" section of the manual page for each function that has this restriction.
Every DDI function that allocates and initializes locks provides an option that allows the calling context to specify the hierarchy value. Again, if a context can acquire locks at multiple IPL values, the highest hierarchy value used at the lower IPL should be less than the lowest hierarchy value used at the higher IPL. The operating system does not usually check for hierarchy violations unless the driver is instrumented and compiled with the DEBUG option.
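The ordering rule can be sketched with two invented locks, where lock X has been allocated with the lower hierarchy value and lock Y with the higher:

```c
#include <sys/types.h>
#include <sys/ksynch.h>

extern lock_t *x_lock;          /* allocated with hierarchy 3 (invented) */
extern lock_t *y_lock;          /* allocated with hierarchy 5 (invented) */

void
update_both(void)
{
        pl_t plx, ply;

        plx = LOCK(x_lock, plbase);     /* lower hierarchy first ... */
        ply = LOCK(y_lock, plbase);     /* ... then higher */
        /* ... critical region touching both structures ... */
        UNLOCK(y_lock, ply);            /* release in decreasing order */
        UNLOCK(x_lock, plx);
}
```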
Consider a case where a single driver supports multiple devices, for example, a driver for network adapters in a system equipped with more than one adapter card (of the same type and make).
Typically, the driver would allocate and initialize a data structure for every device to contain all device-specific information, including configuration information and job queues. A driver should protect all access to the queues by using appropriate locks (spin locks in this example).
There are two possible solutions:
Likewise, you are strongly discouraged from embedding locks in global data structures. For binary compatibility reasons, the size of a lock will always be undefined, and drivers should embed lock pointers in device-specific data structures and explicitly initialize them by calling the appropriate lock allocation routines such as LOCK_ALLOC(D3) or RW_ALLOC(D3).
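A sketch of the recommended layout, with a lock pointer rather than a lock embedded in the per-device structure (the structure, hierarchy value, and lkinfo name string are invented):

```c
#include <sys/types.h>
#include <sys/ksynch.h>
#include <sys/kmem.h>

#define NET_HIER 4                              /* invented hierarchy value */

static lkinfo_t net_lkinfo = { "net queue lock" };

struct netdev {
        lock_t *q_lock;         /* pointer only: lock_t size is not
                                 * part of the binary interface */
        /* ... per-device queues and configuration ... */
};

int
netdev_attach(struct netdev *dp)
{
        /* KM_NOSLEEP may fail, so check the result. */
        dp->q_lock = LOCK_ALLOC(NET_HIER, plstr, &net_lkinfo, KM_NOSLEEP);
        return (dp->q_lock == NULL) ? -1 : 0;
}
```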
Spin locks are often held when looking up elements in lists. A spin lock on the whole list is feasible if there is low contention for the data structure. If there is contention because the critical region is dominated by the search time, then using a different algorithm (for example, hashing) to reduce the search time should also reduce the lock contention. This is a common technique, and the performance benefit is also seen on a uniprocessor system. If contention is still an issue, one could go to a finer-grained lock by having a spin lock per hash bucket. The tradeoff is reduced contention versus locking protocol complexity and poorer single-stream performance. Consider the cases where an element may be on multiple lists (for example, free, LRU, and so on); the extra complexity detracts from the maintainability of the code. The basic philosophy here is to keep it simple.
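A sketch of the per-bucket variant (names invented; note that handing the found element back after dropping the bucket lock is safe only if some other convention, such as a reference count, keeps the element alive):

```c
#include <sys/types.h>
#include <sys/ksynch.h>

#define NBUCKETS 64

struct elem {
        struct elem *e_next;
        uint_t       e_key;
};

struct bucket {
        lock_t      *b_lock;    /* one lock per bucket, same hierarchy/IPL */
        struct elem *b_head;
};
static struct bucket tbl[NBUCKETS];

struct elem *
tbl_lookup(uint_t key)
{
        struct bucket *bp = &tbl[key % NBUCKETS];
        struct elem   *ep;
        pl_t           opl;

        opl = LOCK(bp->b_lock, plbase);         /* only this bucket is held */
        for (ep = bp->b_head; ep != NULL; ep = ep->e_next)
                if (ep->e_key == key)
                        break;
        UNLOCK(bp->b_lock, opl);
        return ep;
}
```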
Unfortunately, not all search time dominated algorithms can be improved. Consider the case where the search key is a wild card. This may require every item on the list to be examined. A lock per hash bucket doesn't necessarily fit the need because the list contents may change in the buckets already examined. This might not be allowable depending on the usage semantics. A spin lock on the whole list may have too much contention. If data structures of this type are read-mostly and write-seldom, then the read/write spin locks should be used. Under these access patterns, the read/write spin locks can enhance concurrency. Multiple readers can all search the list simultaneously. The high contention case would be contained to the infrequent update/write usage. Note that read/write spin locks are not recursive.
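A sketch of a wild-card scan under a read/write spin lock (names invented): every reader walks the whole list in read mode, so searches proceed concurrently, and only the rare update takes the lock in write mode.

```c
#include <sys/types.h>
#include <sys/ksynch.h>

struct elem {
        struct elem *e_next;
        /* ... */
};

extern rwlock_t    *list_lock;  /* allocated with RW_ALLOC(D3) at init */
extern struct elem *list_head;

int
list_scan(int (*match)(struct elem *))
{
        struct elem *ep;
        pl_t         opl;
        int          n = 0;

        opl = RW_RDLOCK(list_lock, plbase);     /* readers run concurrently */
        for (ep = list_head; ep != NULL; ep = ep->e_next)
                if (match(ep))
                        n++;
        RW_UNLOCK(list_lock, opl);
        return n;
}
```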
If there are no accesses to the data protected by the lock from an interrupt routine (that is, the spin lock is only protecting concurrent base-level accesses), choose the interrupt priority level (IPL) based on the contention for the resource as well as the IPL prior to acquiring the lock. For these locks there are essentially three different ways one can arrive at the minimum IPL as well as the IPL at which the lock should be acquired. As is to be expected, each scheme has its advantages and disadvantages. You are encouraged to consider all the possibilities and pick the scheme that best solves the problem at hand. The three different possibilities are:
- acquire the lock at the IPL at which the context enters the critical section (the entry IPL)
- acquire the lock at plhi
- acquire the lock at the highest entry IPL of any context that contends for it
The advantage of this scheme is that the IPL may not be unnecessarily manipulated.
The disadvantage of this scheme is that the lock may be acquired at different IPLs (including plbase), which can result in a non-uniform prolonging of the critical section. Consider a lock that can be acquired by base-level contexts executing at different IPLs: the duration for which the lock is held then depends on the context holding it. If the context executing at the lowest IPL were to acquire the lock, it could be interrupted while holding the lock, adversely affecting the duration of the critical section.
The advantage of this method is that the duration of the critical section is not affected by interrupt processing. The disadvantage of this scheme is that the IPL has to be manipulated, and this could adversely affect single-stream performance (especially on architectures where manipulating IPLs is an expensive operation). Furthermore, since all entry points that acquire a spin lock will be aliased to the spl function when compiled under the uniprocessor compilation option, this scheme will result in sections of code being executed at plhi unnecessarily on a uniprocessor.
The advantage of this scheme is that the lock is always held at the same IPL. Note that this fact alone does not guarantee that the lock will be held for the same duration of time independent of the context holding it. This is because the system interrupt load may not be distributed uniformly across all the processors. The disadvantage of this scheme is that the code needs to change to accommodate new clients that may have entry IPLs higher than the IPL at which the lock is being presently acquired.
Given the possibilities, acquiring the lock at plhi should be done only for highly contested locks that protect very short critical sections. As a general rule of thumb, associate higher IPLs with locks that have high contention. For locks that are not highly contested, either of the other two alternatives -- acquiring the lock at either the entry IPL or the highest entry IPL -- may be used.
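Two of these styles, sketched side by side (lock names invented):

```c
#include <sys/types.h>
#include <sys/ksynch.h>

extern lock_t *cfg_lock;        /* base-level only; highest entry IPL plstr */
extern lock_t *hot_lock;        /* highly contested, very short hold times */

void
example(void)
{
        pl_t opl;

        /* Highest-entry-IPL scheme: every contending context comes in
         * at or below plstr, so the lock is always held at plstr and
         * for a more uniform duration. */
        opl = LOCK(cfg_lock, plstr);
        /* ... critical section ... */
        UNLOCK(cfg_lock, opl);

        /* plhi scheme: justified only for a highly contested lock
         * protecting a very short critical section. */
        opl = LOCK(hot_lock, plhi);
        /* ... very short critical section ... */
        UNLOCK(hot_lock, opl);
}
```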