% doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex
% ======================================================================

The mutual exclusion problem was introduced by Dijkstra in 1965~\cite{Dijkstra65,Dijkstra65a}.
There are several concurrent processes or threads that communicate by shared variables and from time to time need exclusive access to shared resources.
A shared resource and code manipulating it form a pairing called a \Newterm{critical section (CS)}, which is a many-to-one relationship;
\eg if multiple files are being written to by multiple threads, only the pairings of simultaneous writes to the same files are CSs.
Regions of code where the thread is not interested in the resource are combined into the \Newterm{non-critical section (NCS)}.

Exclusive access to a resource is provided by \Newterm{mutual exclusion (MX)}.
MX is implemented by some form of \emph{lock}, where the CS is bracketed by lock procedures @acquire@ and @release@.
Threads execute a loop of the form:
\begin{cfa}
loop of $thread$ p:
    NCS;
    acquire( lock ); CS; release( lock ); // protected critical section with MX
end loop.
\end{cfa}
MX guarantees there is never more than one thread in the CS.
MX must also guarantee eventual progress: when there are competing threads attempting access, eventually some competing thread succeeds, \ie acquires the CS, releases it, and returns to the NCS.
% Lamport \cite[p.~329]{Lam86mx} extends this requirement to the exit protocol.
A stronger constraint is that every thread that calls @acquire@ eventually succeeds after some reasonable bounded time.
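
To make the bracketing concrete, the following is a minimal spin lock sketched in \CC using builtin atomics (an illustrative sketch only; the type and function names are not part of \CFA or its runtime):
\begin{cfa}
#include <atomic>
struct SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
    void acquire() {    // spin until the flag is successfully set
        while ( flag.test_and_set( std::memory_order_acquire ) ) {}
    }
    void release() {    // clear the flag so one competing thread can enter
        flag.clear( std::memory_order_release );
    }
};
SpinLock lock;
void worker() {
    // NCS
    lock.acquire();
    // CS: exclusive access to the shared resource
    lock.release();
    // NCS
}
\end{cfa}
This simple lock provides MX and eventual progress, but not the stronger bounded-waiting constraint, because an unlucky spinning thread can lose the race to set the flag indefinitely.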

\section{Monitor}
\CFA provides a high-level locking object, called a \Newterm{monitor}, an elegant, efficient, high-level mechanism for mutual exclusion and synchronization for shared-memory systems.
First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, several concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.

Figure~\ref{f:AtomicCounter} shows a \CFA and Java monitor implementing an atomic counter.
A \Newterm{monitor} is a programming technique that implicitly binds mutual exclusion to static function scope by call and return.
Lock mutual exclusion, defined by acquire/release calls, is independent of lexical context (analogous to block versus heap storage allocation).
Restricting acquire and release points in a monitor eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
Ultimately, a monitor is implemented using a combination of basic locks and atomic instructions.

\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
@monitor@ Aint {
    int cnt;
};
int ++?( Aint & @mutex@ m ) { return ++m.cnt; }
int ?=?( Aint & @mutex@ l, int r ) { l.cnt = r; }
int ?=?( int & l, Aint & r ) { l = r.cnt; }

int i = 0, j = 0;
Aint x = { 0 }, y = { 0 };  $\C[1.5in]{// no mutex}$
++x; ++y;  $\C{// mutex}$
x = 2; y = i;  $\C{// mutex}$
i = x; j = y;  $\C{// no mutex}\CRT$
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{java}[aboveskip=0pt,belowskip=0pt]
class Aint {
    private int cnt;
    public Aint( int init ) { cnt = init; }
    @synchronized@ public int inc() { return ++cnt; }
    @synchronized@ public void set( int r ) { cnt = r; }
    public int get() { return cnt; }
}
int i = 0, j = 0;
Aint x = new Aint( 0 ), y = new Aint( 0 );
x.inc(); y.inc();
x.set( 2 ); y.set( i );
i = x.get(); j = y.get();
\end{java}
\end{lrbox}

\subfloat[\CFA]{\label{f:AtomicCounterCFA}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[Java]{\label{f:AtomicCounterJava}\usebox\myboxB}
\caption{Atomic integer counter}
\label{f:AtomicCounter}
\end{figure}

Like Java, \CFA monitors have \Newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling other MX functions.
For robustness, \CFA monitors ensure the monitor lock is released regardless of how an acquiring function ends, normally or exceptionally, and returning a shared variable is safe via copying before the lock is released.
Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called.
\CFA functions are designated MX by one or more pointer/reference parameters having qualifier @mutex@.
Java members are designated MX with \lstinline[language=java]{synchronized}, which applies only to the implicit receiver parameter.
In the example, the increment and setter operations need mutual exclusion, while the read-only getter operation is not MX because reading an integer is atomic.
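
For comparison, the \CC approach is to pair the counter with an explicit lock inside a class; the following illustrative sketch (not taken from Figure~\ref{f:AtomicCounter}) mimics the monitor by hand and uses a recursive mutex to obtain multi-acquire semantics:
\begin{cfa}
#include <mutex>
class Counter {    // hand-coded monitor: data and lock bound in one object
    std::recursive_mutex m;    // recursive, so a thread already inside may reacquire
    int cnt = 0;
  public:
    int inc() { std::lock_guard<std::recursive_mutex> g( m ); return ++cnt; }    // MX
    void set( int r ) { std::lock_guard<std::recursive_mutex> g( m ); cnt = r; }    // MX
    int get() { return cnt; }    // no MX, mirroring the getters in the figure
    void inc2() { std::lock_guard<std::recursive_mutex> g( m ); inc(); inc(); }    // reacquires without deadlock
};
\end{cfa}
Every acquire and release is written manually, which is exactly the bookkeeping the monitor performs implicitly at call and return.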

As stated, the non-object-oriented nature of \CFA monitors allows a function to acquire multiple mutex objects.
For example, the bank-transfer problem requires locking two bank accounts to safely debit and credit money between accounts.
\begin{cfa}
monitor BankAccount {
    int balance;
};
void deposit( BankAccount & mutex b, int deposit ) with( b ) {
    balance += deposit;
}
void transfer( BankAccount & mutex my, BankAccount & mutex your, int me2you ) {
    deposit( my, -me2you );  $\C{// debit}$
    deposit( your, me2you );  $\C{// credit}$
}
\end{cfa}
The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments.


\section{\lstinline{mutex} statement}
Restricting implicit lock acquisition to function entry and exit can be awkward for certain problems.
To increase locking flexibility, some languages introduce a mutex statement.
\VRef[Figure]{f:ReadersWriter} shows the outline of a reader/writer lock written as a \CFA monitor and with mutex statements.
(The exact lock implementation is irrelevant.)
The @read@ and @write@ functions are called with a reader/writer lock and any arguments to perform reading or writing.
The @read@ function is not MX because multiple readers can read simultaneously.
MX is acquired within @read@ by calling the (nested) helper functions @StartRead@ and @EndRead@ or executing the mutex statements.
Between the calls or statements, reads can execute simultaneously within the body of @read@.
The @write@ function does not require refactoring because writing is a CS.
The mutex-statement version is better because it has fewer names, less argument/parameter passing, and can possibly hold MX for a shorter duration.

\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor RWlock { ... };
void read( RWlock & rw, ... ) {
    void StartRead( RWlock & @mutex@ rw ) { ... }
    void EndRead( RWlock & @mutex@ rw ) { ... }
    StartRead( rw );
    ... // read without MX
    EndRead( rw );
}
void write( RWlock & @mutex@ rw, ... ) {
    ... // write with MX
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]

void read( RWlock & rw, ... ) {


    @mutex@( rw ) { ... }
    ... // read without MX
    @mutex@( rw ) { ... }
}
void write( RWlock & @mutex@ rw, ... ) {
    ... // write with MX
}
\end{cfa}
\end{lrbox}

\subfloat[monitor]{\label{f:RWmonitor}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[mutex statement]{\label{f:RWmutexstmt}\usebox\myboxB}
\caption{Readers writer problem}
\label{f:ReadersWriter}
\end{figure}

This work adds a mutex statement to \CFA, but generalizes it beyond implicit monitor locks.
In detail, the mutex statement has a clause and statement block, similar to a conditional or loop statement.
The clause accepts any number of lockable objects (like a \CFA MX function prototype), and locks them for the duration of the statement.
The locks are acquired in a deadlock-free manner and released regardless of how control flow exits the statement.
The mutex statement provides easy lock usage in the common case of lexically wrapping a CS.
Examples of the \CFA mutex statement are shown in \VRef[Listing]{l:cfa_mutex_ex}.

\begin{cfa}[caption={\CFA mutex statement usage},label={l:cfa_mutex_ex}]
owner_lock lock1, lock2, lock3;
@mutex@( lock2, lock3 ) ...;  $\C{// inline statement}$
@mutex@( lock1, lock2, lock3 ) { ... }  $\C{// statement block}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
    ... // check values, no MX
    @mutex@( my, your ) { // MX is held for a shorter duration than the function body
        deposit( my, -me2you );  $\C{// debit}$
        deposit( your, me2you );  $\C{// credit}$
    }
}
\end{cfa}

\section{Other Languages}
There are similar constructs to the mutex statement in other programming languages.
Java has a feature called a synchronized statement, which looks like \CFA's mutex statement, but only accepts a single object in the clause and only handles monitor locks.
The \CC standard library has a @scoped_lock@, which is also similar to the mutex statement.
The @scoped_lock@ takes any number of locks in its constructor, and acquires them in a deadlock-free manner.
It then releases them when the @scoped_lock@ object is deallocated using \gls{raii}.
An example of \CC @scoped_lock@ is shown in \VRef[Listing]{l:cc_scoped_lock}.

\begin{cfa}[caption={\CC \lstinline{scoped_lock} usage},label={l:cc_scoped_lock}]
struct BankAccount {
    @recursive_mutex m;@  $\C{// must be recursive}$
    int balance = 0;
};
void deposit( BankAccount & b, int deposit ) {
    @scoped_lock lock( b.m );@  $\C{// RAII acquire}$
    b.balance += deposit;
}  $\C{// RAII release}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
    @scoped_lock lock( my.m, your.m );@  $\C{// RAII acquire}$
    deposit( my, -me2you );  $\C{// debit}$
    deposit( your, me2you );  $\C{// credit}$
}  $\C{// RAII release}$
\end{cfa}
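
The RAII pairing also covers early exits.
The following sketch extends the listing above with a hypothetical @withdraw@ function (not part of the original example): the lock is released on the early return, on the normal return, and if an exception propagates out of the function.
\begin{cfa}
bool withdraw( BankAccount & b, int amount ) {
    scoped_lock lock( b.m );    // RAII acquire
    if ( b.balance < amount ) return false;    // RAII release on early return
    b.balance -= amount;
    return true;    // RAII release on normal return or exception
}
\end{cfa}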

\section{\CFA Implementation}
The \CFA mutex statement takes some ideas from both the Java and \CC features.
Like Java, \CFA introduces a new statement rather than building from existing language features.
(\CFA has sufficient language features to mimic \CC RAII locking.)
This syntactic choice makes MX explicit rather than implicit via object declarations.
Hence, it is easier for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code.
Furthermore, concurrent safety is provided across an entire program for the complex operation of acquiring multiple locks in a deadlock-free manner.
Unlike Java, both \CFA's mutex statement and \CC's @scoped_lock@ use parametric polymorphism to allow user-defined types to work with this feature.
In this case, the polymorphism allows a locking mechanism to acquire MX over an object without having to know the object internals or what kind of lock it is using.
\CFA provides and uses this locking trait:
\begin{cfa}
forall( L & | sized(L) )
trait is_lock {
    void lock( L & );
    void unlock( L & );
};
\end{cfa}
\CC @scoped_lock@ has this trait implicitly based on functions accessed in a template.
@scoped_lock@ also requires @try_lock@ because of its technique for deadlock avoidance \see{\VRef{s:DeadlockAvoidance}}.
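
For example, a user-defined spin lock written in \CC only needs member functions named @lock@, @unlock@, and @try_lock@ to work with @scoped_lock@; the following sketch is illustrative only.
\begin{cfa}
#include <atomic>
#include <mutex>
struct spinlock {    // user-defined lock type
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
    void lock() { while ( flag.test_and_set( std::memory_order_acquire ) ) {} }
    void unlock() { flag.clear( std::memory_order_release ); }
    bool try_lock() { return ! flag.test_and_set( std::memory_order_acquire ); }    // only needed for multi-lock acquisition
};
spinlock a, b;
void work() {
    std::scoped_lock guard( a, b );    // template implicitly checks for lock/try_lock/unlock
    // CS
}
\end{cfa}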

The following shows how the @mutex@ statement is used with \CFA streams to eliminate unpredictable results when printing in a concurrent program.
For example, if two threads execute:
\begin{cfa}
thread$\(_1\)$ : sout | "abc" | "def";
thread$\(_2\)$ : sout | "uvw" | "xyz";
\end{cfa}
any of the following outputs can appear, including a segmentation fault due to I/O buffer corruption:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l|l|l|l@{}}
abc def & abc uvw xyz & uvw abc xyz def & abuvwc dexf & uvw abc def \\
uvw xyz & def & & yz & xyz
\end{tabular}
\end{cquote}
The stream type for @sout@ is defined to satisfy the @is_lock@ trait, so the @mutex@ statement can be used to lock an output stream while producing output.
From the programmer's perspective, it is sufficient to know an object can be locked and then any necessary MX is easily available via the @mutex@ statement.
This ability improves safety and programmer productivity since it abstracts away the concurrent details.
Hence, a programmer can easily protect cascaded I/O expressions:
\begin{cfa}
thread$\(_1\)$ : mutex( sout ) sout | "abc" | "def";
thread$\(_2\)$ : mutex( sout ) sout | "uvw" | "xyz";
\end{cfa}
constraining the output to two different lines in either order:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l@{}}
abc def & uvw xyz \\
uvw xyz & abc def
\end{tabular}
\end{cquote}
where this level of safe nondeterministic output is acceptable.
Alternatively, multiple I/O statements can be protected using the mutex statement block:
\begin{cfa}
mutex( sout ) {    // acquire stream lock for sout for block duration
    sout | "abc";
    mutex( sout ) sout | "uvw" | "xyz";    // OK because sout lock is recursive
    sout | "def";
}    // implicitly release sout lock
\end{cfa}
The inner lock acquire is likely to occur through a function call that does a thread-safe print.
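
An equivalent \CC pattern guards the stream with an explicit mutex shared by all printing threads; this sketch is illustrative and the global @print_lock@ is an assumption, not a standard facility:
\begin{cfa}
#include <iostream>
#include <mutex>
std::mutex print_lock;    // one lock shared by every thread that prints
void thread1() {
    std::lock_guard<std::mutex> g( print_lock );    // held for the whole cascaded expression
    std::cout << "abc" << " " << "def" << std::endl;
}
void thread2() {
    std::lock_guard<std::mutex> g( print_lock );
    std::cout << "uvw" << " " << "xyz" << std::endl;
}
\end{cfa}
The lock must be the same object in every thread; the \CFA version gets this for free because the lock is part of the stream itself.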

\section{Deadlock Avoidance}\label{s:DeadlockAvoidance}
The mutex statement uses the deadlock prevention technique of lock ordering, where the circular-wait condition of a deadlock cannot occur if all locks are acquired in the same order.
The @scoped_lock@ uses a deadlock avoidance algorithm where all locks after the first are acquired using @try_lock@ and if any of the lock attempts fail, all acquired locks are released.
This repeats after selecting a new starting point in a cyclic manner until all locks are acquired successfully.
This deadlock avoidance algorithm is shown in Listing~\ref{l:cc_deadlock_avoid}.
The algorithm is taken directly from the source code of the @<mutex>@ header, with some renaming and comments for clarity.

\begin{cfa}[caption={\CC \lstinline{scoped_lock} deadlock avoidance algorithm},label={l:cc_deadlock_avoid}]
int first = 0;  // first lock to attempt to lock
do {
    // locks is the array of locks to acquire
    locks[first].lock();  $\C{// lock first lock}$
    for ( int i = 1; i < Num_Locks; i += 1 ) {  $\C{// iterate over rest of locks}$
        const int idx = (first + i) % Num_Locks;
        if ( ! locks[idx].try_lock() ) {  $\C{// try lock each one}$
            for ( int j = i; j != 0; j -= 1 )  $\C{// release all locks}$
                locks[(first + j - 1) % Num_Locks].unlock();
            first = idx;  $\C{// rotate which lock to acquire first}$
            break;
        }
    }
    // if first lock is still held then all have been acquired
} while ( ! locks[first].owns_lock() );  $\C{// is first lock held?}$
\end{cfa}

While the algorithm in Listing~\ref{l:cc_deadlock_avoid} successfully avoids deadlock, there is a livelock scenario.
Assume two threads, $A$ and $B$, create a @scoped_lock@ accessing two locks, $L1$ and $L2$.
A livelock can form as follows.
Thread $A$ creates a @scoped_lock@ with arguments $L1$, $L2$, and $B$ creates a @scoped_lock@ with the lock arguments in the opposite order $L2$, $L1$.
Both threads acquire the first lock in their order and then fail the @try_lock@ since the other lock is held.
Both threads then reset their starting lock to be their second lock and try again.
This time $A$ has order $L2$, $L1$, and $B$ has order $L1$, $L2$, which is identical to the starting setup but with the ordering swapped between threads.
If the threads perform this action in lock-step, they cycle indefinitely without entering the CS, \ie livelock.
Hence, to use @scoped_lock@ safely, a programmer must manually construct and maintain a global ordering of lock arguments passed to @scoped_lock@.
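
The problematic pattern is easy to write.
In the following illustrative \CC sketch, the two threads pass the locks to @scoped_lock@ in opposite orders, so under the lock-step schedule described above neither thread ever enters its CS:
\begin{cfa}
#include <mutex>
std::mutex L1, L2;
void threadA() {
    for ( ;; ) {
        std::scoped_lock guard( L1, L2 );    // argument order L1, L2
        // CS
    }
}
void threadB() {
    for ( ;; ) {
        std::scoped_lock guard( L2, L1 );    // opposite argument order L2, L1
        // CS
    }
}
\end{cfa}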

The lock ordering algorithm used in \CFA mutex functions and statements is deadlock and livelock free.
The algorithm uses the lock memory addresses as keys, sorts the keys, and then acquires the locks in sorted order.
For fewer than 7 ($2^3-1$) locks, the sort is unrolled, performing the minimum number of compares and swaps for the given number of locks;
for 7 or more locks, insertion sort is used.
Since it is extremely rare to hold more than 6 locks at a time, the algorithm is fast and executes in $O(1)$ time.
Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution.
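
The following \CC sketch shows the idea of address ordering (illustrative only, not the \CFA implementation): the lock pointers are sorted by address and then acquired in ascending order, so every acquirer follows the same global order.
\begin{cfa}
#include <algorithm>
#include <functional>
#include <mutex>
void acquire_ordered( std::mutex * locks[], int n ) {
    std::sort( locks, locks + n, std::less<std::mutex *>() );    // order by memory address
    for ( int i = 0; i < n; i += 1 )    // acquire in ascending address order
        locks[i]->lock();
}
void release_all( std::mutex * locks[], int n ) {
    for ( int i = n - 1; i >= 0; i -= 1 )    // release in reverse order
        locks[i]->unlock();
}
\end{cfa}
Because the address order is global and never changes, no set of threads can form a cycle of waiting, and no retrying is needed.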

The downside to the sorting approach is that it is not fully compatible with manual usage of the same locks outside the @mutex@ statement, \ie when the locks are acquired without using the @mutex@ statement.
The following scenario is a classic deadlock.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{30pt}}l@{}}
\begin{cfa}
lock L1, L2; // assume &L1 < &L2
$\textbf{thread\(_1\)}$
acquire( L2 );
acquire( L1 );
    CS
release( L1 );
release( L2 );
\end{cfa}
&
\begin{cfa}

$\textbf{thread\(_2\)}$
mutex( L1, L2 ) {

    CS

}
\end{cfa}
\end{tabular}
\end{cquote}
Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
The convenience and safety of the @mutex@ statement, \eg guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating any deadlock scenario.

\section{Performance}
Given the two multi-acquisition algorithms in \CC and \CFA, each with differing advantages and disadvantages, it is interesting to compare their performance.
Comparison with Java is not possible, since its synchronized statement takes only a single lock.

The comparison starts with a baseline that acquires the locks directly in a fixed order, without a mutex statement or @scoped_lock@, and then releases them.
The baseline helps highlight the cost of the deadlock avoidance/prevention algorithms for each implementation.

The benchmark used to evaluate the avoidance algorithms repeatedly acquires a fixed number of locks in a random order and then releases them.
The pseudo code for the deadlock avoidance benchmark is shown in \VRef[Listing]{l:deadlock_avoid_pseudo}.
To ensure the comparison exercises the implementation of each lock avoidance algorithm, an identical spinlock is implemented in each language using a set of builtin atomics available in both \CC and \CFA.
The benchmarks are run for a fixed duration of 10 seconds and then terminate.
The total number of times the group of locks is acquired is returned for each thread.
Each variation is run 11 times on 2, 4, 8, 16, 24, 32 cores and with 2, 4, and 8 locks being acquired.
The median is calculated and is plotted alongside the 95\% confidence intervals for each point.

\begin{cfa}[caption={Deadlock avoidance benchmark pseudo code},label={l:deadlock_avoid_pseudo}]

$\PAB{// add pseudo code}$

\end{cfa}
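
As an illustration of the description above, the per-thread benchmark loop can be sketched as follows in \CC; the harness names, the use of @std::mutex@ in place of the custom spinlock, and the shuffle-based randomization are assumptions for the sketch, not the measured code:
\begin{cfa}
#include <algorithm>
#include <atomic>
#include <mutex>
#include <random>
#include <vector>
std::atomic<bool> stop{ false };    // set by the driver after the 10 second duration
size_t benchmark_thread( std::vector<std::mutex *> locks ) {    // 2-lock case shown
    std::mt19937 rng( std::random_device{}() );
    size_t acquires = 0;
    while ( ! stop.load( std::memory_order_relaxed ) ) {
        std::shuffle( locks.begin(), locks.end(), rng );    // random acquisition order
        std::scoped_lock guard( *locks[0], *locks[1] );    // acquire the group, released at end of iteration
        acquires += 1;    // count one successful group acquisition
    }
    return acquires;    // total returned per thread
}
\end{cfa}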

The performance experiments were run on the following multi-core hardware systems to determine differences across platforms:
\begin{list}{\arabic{enumi}.}{\usecounter{enumi}\topsep=5pt\parsep=5pt\itemsep=0pt}
% sudo dmidecode -t system
\item
Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
\item
Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 processing units) 2.2 GHz, TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
\end{list}
%The hardware architectures are different in threading (multithreading vs hyper), cache structure (MESI or MESIF), NUMA layout (QPI vs HyperTransport), memory model (TSO vs WO), and energy/thermal mechanisms (turbo-boost).
%Software that runs well on one architecture may run poorly or not at all on another.

Figure~\ref{f:mutex_bench} shows the results of the benchmark experiments.
\PAB{Make the points in the graphs for each line different.
Also, make the text in the graphs larger.}
The baseline results for both languages are mostly comparable, except for the 8-lock results in \ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, where the \CFA baseline is slightly slower.
The avoidance results for the two languages are significantly different: \CFA's mutex statement achieves throughput that is orders of magnitude higher than \CC's @scoped_lock@.
The slowdown for @scoped_lock@ is likely due to its deadlock-avoidance implementation.
Since it uses a retry-based mechanism, it can take a long time for threads to progress.
Additionally, the potential for livelock in the algorithm can result in very little throughput under high contention.
For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain the large confidence intervals for some of the data points in the \CC data.
In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, the mutex statement performs better than the baseline.
At 7 locks and above, the mutex statement switches from a hard-coded sort to insertion sort.
It is likely that the improvement in throughput compared to the baseline is due to the time spent in the insertion sort, which decreases contention on the locks.

\begin{figure}
\centering
\subfloat[AMD]{
    \resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_2.pgf}}
}
\subfloat[Intel]{
    \resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_2.pgf}}
}

\subfloat[AMD]{
    \resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_4.pgf}}
}
\subfloat[Intel]{
    \resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_4.pgf}}
}

\subfloat[AMD]{
    \resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_8.pgf}}
    \label{f:mutex_bench8_AMD}
}
\subfloat[Intel]{
    \resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_8.pgf}}
    \label{f:mutex_bench8_Intel}
}
\caption{The aggregate lock benchmark comparing \CC \lstinline{scoped_lock} and \CFA mutex statement throughput (higher is better).}
\label{f:mutex_bench}
\end{figure}

% Local Variables: %
% tab-width: 4 %
% End: %