% ======================================================================
% ======================================================================
\chapter{Mutex Statement}\label{s:mutexstmt}
% ======================================================================
% ======================================================================

The mutual exclusion problem was introduced by Dijkstra in 1965~\cite{Dijkstra65,Dijkstra65a}.
There are several concurrent processes or threads that communicate by shared variables and from time to time need exclusive access to shared resources.
A shared resource and code manipulating it form a pairing called a \Newterm{critical section (CS)}, which is a many-to-one relationship;
\eg if multiple files are being written to by multiple threads, only the pairings of simultaneous writes to the same files are CSs.
Regions of code where the thread is not interested in the resource are combined into the \Newterm{non-critical section (NCS)}.

Exclusive access to a resource is provided by \Newterm{mutual exclusion (MX)}.
MX is implemented by some form of \emph{lock}, where the CS is bracketed by lock procedures @acquire@ and @release@.
Threads execute a loop of the form:
\begin{cfa}
loop of $thread$ p:
	NCS;
	acquire( lock ); CS; release( lock ); // protected critical section with MX
end loop.
\end{cfa}
MX guarantees there is never more than one thread in the CS.
MX must also guarantee eventual progress: when there are competing threads attempting access, eventually some competing thread succeeds, \ie acquires the CS, releases it, and returns to the NCS.
% Lamport \cite[p.~329]{Lam86mx} extends this requirement to the exit protocol.
A stronger constraint is that every thread that calls @acquire@ eventually succeeds after some reasonable bounded time.

\section{Monitor}
\CFA provides a high-level locking object, called a \Newterm{monitor}, an elegant, efficient, high-level mechanism for mutual exclusion and synchronization in shared-memory systems.
The monitor was first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}; several concurrent programming languages provide monitors as an explicit language construct, \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.

Figure~\ref{f:AtomicCounter} shows a \CFA and Java monitor implementing an atomic counter.
A \Newterm{monitor} is a programming technique that implicitly binds mutual exclusion to static function scope by call and return.
In contrast, lock mutual exclusion, defined by acquire/release calls, is independent of lexical context (analogous to block versus heap storage allocation).
Restricting acquire and release points in a monitor eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
Ultimately, a monitor is implemented using a combination of basic locks and atomic instructions.

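Conceptually, the compiler brackets the body of each MX function with an acquire and release of the monitor's hidden lock, as in the following sketch for the counter increment, where @lock@ and @unlock@ stand in for the internal monitor-lock operations and details such as multi-acquire and exceptional exit are elided.
\begin{cfa}
int ++?( Aint & m ) {	// conceptual expansion of the mutex version (sketch)
	lock( m );	// acquire hidden monitor lock on entry
	int ret = ++m.cnt;	// original function body
	unlock( m );	// release on return (and during exception unwind)
	return ret;
}
\end{cfa}
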
\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
@monitor@ Aint {
	int cnt;
};
int ++?( Aint & @mutex@ m ) { return ++m.cnt; }
int ?=?( Aint & @mutex@ l, int r ) { return l.cnt = r; }
int ?=?( int & l, Aint & r ) { return l = r.cnt; }

int i = 0, j = 0;
Aint x = { 0 }, y = { 0 };	$\C[1.5in]{// no mutex}$
++x; ++y;	$\C{// mutex}$
x = 2; y = i;	$\C{// mutex}$
i = x; j = y;	$\C{// no mutex}\CRT$
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{java}[aboveskip=0pt,belowskip=0pt]
class Aint {
	private int cnt;
	public Aint( int init ) { cnt = init; }
	@synchronized@ public int inc() { return ++cnt; }
	@synchronized@ public void set( int r ) { cnt = r; }
	public int get() { return cnt; }
}
int i = 0, j = 0;
Aint x = new Aint( 0 ), y = new Aint( 0 );
x.inc(); y.inc();
x.set( 2 ); y.set( i );
i = x.get(); j = y.get();
\end{java}
\end{lrbox}

\subfloat[\CFA]{\label{f:AtomicCounterCFA}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[Java]{\label{f:AtomicCounterJava}\usebox\myboxB}
\caption{Atomic integer counter}
\label{f:AtomicCounter}
\end{figure}

Like Java, \CFA monitors have \Newterm{multi-acquire} semantics, so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calls to other MX functions.
For robustness, \CFA monitors ensure the monitor lock is released regardless of how an acquiring function ends, normally or exceptionally, and returning a shared variable is safe because it is copied before the lock is released.
Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called.
\CFA functions are designated MX by one or more pointer/reference parameters having qualifier @mutex@.
Java members are designated MX with \lstinline[language=java]{synchronized}, which applies only to the implicit receiver parameter.
In the example, the increment and setter operations need mutual exclusion, while the read-only getter operation is not MX because reading an integer is atomic.

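For example, multi-acquire allows an MX function to call another MX function on the same monitor; the following sketch adds a hypothetical @reset@ function to the counter in Figure~\ref{f:AtomicCounterCFA}, which reacquires the already-held monitor lock without deadlock.
\begin{cfa}
void reset( Aint & @mutex@ m ) {	// acquires monitor m
	m = 0;	// calls the MX assignment operator, reacquiring m without deadlock
}
\end{cfa}
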
As stated, the non-object-oriented nature of \CFA monitors allows a function to acquire multiple mutex objects.
For example, the bank-transfer problem requires locking two bank accounts to safely debit and credit money between accounts.
\begin{cfa}
monitor BankAccount {
	int balance;
};
void deposit( BankAccount & mutex b, int deposit ) with( b ) {
	balance += deposit;
}
void transfer( BankAccount & mutex my, BankAccount & mutex your, int me2you ) {
	deposit( my, -me2you );	$\C{// debit}$
	deposit( your, me2you );	$\C{// credit}$
}
\end{cfa}
The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments.
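For example, two threads performing opposing transfers between hypothetical accounts @A@ and @B@ cannot deadlock, even though the monitor arguments appear in opposite orders, because both calls internally acquire the two monitors in the same safe order:
\begin{cfa}
thread$\(_1\)$ : transfer( A, B, 100 );	// acquires { A, B }
thread$\(_2\)$ : transfer( B, A, 50 );	// acquires { B, A } in the same internal order
\end{cfa}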


\section{\lstinline{mutex} statement}
Restricting implicit lock acquisition to function entry and exit can be awkward for certain problems.
To increase locking flexibility, some languages introduce a mutex statement.
\VRef[Figure]{f:ReadersWriter} shows the outline of a reader/writer lock written as a \CFA monitor and as mutex statements.
(The exact lock implementation is irrelevant.)
The @read@ and @write@ functions are called with a reader/writer lock and any arguments to perform reading or writing.
The @read@ function is not MX because multiple readers can read simultaneously.
MX is acquired within @read@ by calling the (nested) helper functions @StartRead@ and @EndRead@ or by executing the mutex statements.
Between the calls or statements, reads can execute simultaneously within the body of @read@.
The @write@ function does not require refactoring because writing is a CS.
The mutex-statement version is better because it has fewer names, less argument/parameter passing, and can possibly hold MX for a shorter duration.

\begin{figure}
\centering

\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor RWlock { ... };
void read( RWlock & rw, ... ) {
	void StartRead( RWlock & @mutex@ rw ) { ... }
	void EndRead( RWlock & @mutex@ rw ) { ... }
	StartRead( rw );
	... // read without MX
	EndRead( rw );
}
void write( RWlock & @mutex@ rw, ... ) {
	... // write with MX
}
\end{cfa}
\end{lrbox}

\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]

void read( RWlock & rw, ... ) {


	@mutex@( rw ) { ... }
	... // read without MX
	@mutex@( rw ) { ... }
}
void write( RWlock & @mutex@ rw, ... ) {
	... // write with MX
}
\end{cfa}
\end{lrbox}

\subfloat[monitor]{\label{f:RWmonitor}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[mutex statement]{\label{f:RWmutexstmt}\usebox\myboxB}
\caption{Readers/writer problem}
\label{f:ReadersWriter}
\end{figure}

This work adds a mutex statement to \CFA, but generalizes it beyond implicit monitor locks.
In detail, the mutex statement has a clause and a statement block, similar to a conditional or loop statement.
The clause accepts any number of lockable objects (like a \CFA MX function prototype), and locks them for the duration of the statement.
The locks are acquired in a deadlock-free manner and released regardless of how control flow exits the statement.
The mutex statement provides easy lock usage in the common case of lexically wrapping a CS.
Examples of the \CFA mutex statement are shown in \VRef[Listing]{l:cfa_mutex_ex}.

\begin{cfa}[caption={\CFA mutex statement usage},label={l:cfa_mutex_ex}]
owner_lock lock1, lock2, lock3;
@mutex@( lock2, lock3 ) ...;	$\C{// inline statement}$
@mutex@( lock1, lock2, lock3 ) { ... }	$\C{// statement block}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
	... // check values, no MX
	@mutex@( my, your ) {	// MX is shorter duration than function body
		deposit( my, -me2you );	$\C{// debit}$
		deposit( your, me2you );	$\C{// credit}$
	}
}
\end{cfa}

\section{Other Languages}
There are similar constructs to the mutex statement in other programming languages.
Java has a feature called a synchronized statement, which looks like \CFA's mutex statement, but only accepts a single object in the clause and only handles monitor locks.
The \CC standard library has a @scoped_lock@, which is also similar to the mutex statement.
The @scoped_lock@ takes any number of locks in its constructor and acquires them in a deadlock-free manner.
It then releases them when the @scoped_lock@ object is deallocated using \gls{raii}.
An example of \CC @scoped_lock@ is shown in \VRef[Listing]{l:cc_scoped_lock}.

\begin{cfa}[caption={\CC \lstinline{scoped_lock} usage},label={l:cc_scoped_lock}]
struct BankAccount {
	@recursive_mutex m;@	$\C{// must be recursive}$
	int balance = 0;
};
void deposit( BankAccount & b, int deposit ) {
	@scoped_lock lock( b.m );@	$\C{// RAII acquire}$
	b.balance += deposit;
}	$\C{// RAII release}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
	@scoped_lock lock( my.m, your.m );@	$\C{// RAII acquire}$
	deposit( my, -me2you );	$\C{// debit}$
	deposit( your, me2you );	$\C{// credit}$
}	$\C{// RAII release}$
\end{cfa}

\section{\CFA implementation}
The \CFA mutex statement takes some ideas from both the Java and \CC features.
Like Java, \CFA introduces a new statement rather than building from existing language features.
(\CFA has sufficient language features to mimic \CC RAII locking.)
This syntactic choice makes MX explicit rather than implicit via object declarations.
Hence, it is easier for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code.
Furthermore, concurrent safety is provided across an entire program for the complex operation of acquiring multiple locks in a deadlock-free manner.
Unlike Java, both \CFA's mutex statement and \CC's @scoped_lock@ use parametric polymorphism to allow user-defined types to work with this feature.
In this case, the polymorphism allows a locking mechanism to acquire MX over an object without having to know the object's internals or what kind of lock it is using.
\CFA provides and uses this locking trait:
\begin{cfa}
forall( L & | sized(L) )
trait is_lock {
	void lock( L & );
	void unlock( L & );
};
\end{cfa}
\CC @scoped_lock@ has this trait implicitly based on functions accessed in a template.
@scoped_lock@ also requires @try_lock@ because of its technique for deadlock avoidance \see{\VRef{s:DeadlockAvoidance}}.
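For example, a user-defined spinlock only needs @lock@ and @unlock@ routines to satisfy @is_lock@ and hence work with the mutex statement; the following sketch uses the gcc atomic builtins, and the type and variable names are illustrative.
\begin{cfa}
struct spinlock { volatile bool taken; };
void lock( spinlock & this ) {	// busy wait until acquired
	while ( __atomic_test_and_set( &this.taken, __ATOMIC_ACQUIRE ) ) {}
}
void unlock( spinlock & this ) {
	__atomic_clear( &this.taken, __ATOMIC_RELEASE );
}
spinlock s1 = { false }, s2 = { false };
mutex( s1, s2 ) { ... }	// spinlock satisfies is_lock, so both locks are acquired safely
\end{cfa}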

The following shows how the @mutex@ statement is used with \CFA streams to eliminate unpredictable results when printing in a concurrent program.
For example, if two threads execute:
\begin{cfa}
thread$\(_1\)$ : sout | "abc" | "def";
thread$\(_2\)$ : sout | "uvw" | "xyz";
\end{cfa}
any of the following outputs can appear, including a segmentation fault due to I/O buffer corruption:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l|l|l|l@{}}
abc def & abc uvw xyz & uvw abc xyz def & abuvwc dexf & uvw abc def \\
uvw xyz & def & & yz & xyz
\end{tabular}
\end{cquote}
The stream type for @sout@ is defined to satisfy the @is_lock@ trait, so the @mutex@ statement can be used to lock an output stream while producing output.
From the programmer's perspective, it is sufficient to know an object can be locked; any necessary MX is then easily available via the @mutex@ statement.
This ability improves safety and programmer productivity since it abstracts away the concurrency details.
Hence, a programmer can easily protect cascaded I/O expressions:
\begin{cfa}
thread$\(_1\)$ : mutex( sout ) sout | "abc" | "def";
thread$\(_2\)$ : mutex( sout ) sout | "uvw" | "xyz";
\end{cfa}
constraining the output to two different lines in either order:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l@{}}
abc def & uvw xyz \\
uvw xyz & abc def
\end{tabular}
\end{cquote}
where this level of safe nondeterministic output is acceptable.
Alternatively, multiple I/O statements can be protected using the mutex statement block:
\begin{cfa}
mutex( sout ) {	// acquire stream lock for sout for block duration
	sout | "abc";
	mutex( sout ) sout | "uvw" | "xyz";	// OK because sout lock is recursive
	sout | "def";
}	// implicitly release sout lock
\end{cfa}
The inner lock acquire is likely to occur through a function call that does a thread-safe print.
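For example, a hypothetical helper that itself locks @sout@ can be called safely from inside an outer @mutex( sout )@ block:
\begin{cfa}
void print_pair( int a, int b ) {
	mutex( sout ) sout | a | b;	// nested acquire is safe because the sout lock is recursive
}
mutex( sout ) {
	sout | "totals";
	print_pair( 4, 2 );	// inner lock acquired through the function call
}
\end{cfa}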

\section{Deadlock Avoidance}\label{s:DeadlockAvoidance}
The mutex statement uses the deadlock-avoidance technique of lock ordering, where the circular-wait condition of a deadlock cannot occur if all locks are acquired in the same order.
The @scoped_lock@ uses a deadlock-avoidance algorithm where all locks after the first are acquired using @try_lock@, and if any of the lock attempts fail, all acquired locks are released.
This repeats after selecting a new starting point in a cyclic manner until all locks are acquired successfully.
This deadlock-avoidance algorithm is shown in Listing~\ref{l:cc_deadlock_avoid}.
The algorithm is taken directly from the source code of the @<mutex>@ header, with some renaming and comments for clarity.

\begin{cfa}[caption={\CC \lstinline{scoped_lock} deadlock avoidance algorithm},label={l:cc_deadlock_avoid}]
int first = 0;	// first lock to attempt to lock
do {
	// locks is the array of locks to acquire
	locks[first].lock();	$\C{// lock first lock}$
	for ( int i = 1; i < Num_Locks; i += 1 ) {	$\C{// iterate over rest of locks}$
		const int idx = (first + i) % Num_Locks;
		if ( ! locks[idx].try_lock() ) {	$\C{// try lock each one}$
			for ( int j = i; j != 0; j -= 1 )	$\C{// release all locks}$
				locks[(first + j - 1) % Num_Locks].unlock();
			first = idx;	$\C{// rotate which lock to acquire first}$
			break;
		}
	}
	// if first lock is still held then all have been acquired
} while ( ! locks[first].owns_lock() );	$\C{// is first lock held?}$
\end{cfa}

While the algorithm in Listing~\ref{l:cc_deadlock_avoid} successfully avoids deadlock, there is a livelock scenario.
Assume two threads, $A$ and $B$, each create a @scoped_lock@ accessing two locks, $L1$ and $L2$.
A livelock can form as follows.
Thread $A$ creates a @scoped_lock@ with arguments $L1$, $L2$, and $B$ creates a @scoped_lock@ with the lock arguments in the opposite order $L2$, $L1$.
Both threads acquire the first lock in their order and then fail the @try_lock@ since the other lock is held.
Both threads then reset their starting lock to be their second lock and try again.
This time $A$ has order $L2$, $L1$, and $B$ has order $L1$, $L2$, which is identical to the starting setup but with the ordering swapped between threads.
If the threads perform this action in lock-step, they cycle indefinitely without entering the CS, \ie livelock.
Hence, to use @scoped_lock@ safely, a programmer must manually construct and maintain a global ordering of lock arguments passed to @scoped_lock@.

The lock-ordering algorithm used in \CFA mutex functions and statements is deadlock- and livelock-free.
The algorithm uses the lock memory addresses as keys, sorts the keys, and then acquires the locks in sorted order.
For fewer than 7 locks ($2^3-1$), the sort is unrolled, performing the minimum number of compares and swaps for the given number of locks;
for 7 or more locks, insertion sort is used.
Since it is extremely rare to hold more than 6 locks at a time, the algorithm is fast and executes in $O(1)$ time.
Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution.

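For two locks, the effect is roughly the following sketch, where the mutex statement internally performs the address comparison and the ordered acquire/release:
\begin{cfa}
owner_lock L1, L2;
// mutex( L1, L2 ) { ... } conceptually expands to:
owner_lock * lo = &L1 < &L2 ? &L1 : &L2;	// sort the two lock addresses
owner_lock * hi = &L1 < &L2 ? &L2 : &L1;
lock( *lo );  lock( *hi );	// acquire in ascending address order
... // CS
unlock( *hi );  unlock( *lo );	// release
\end{cfa}
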
The downside to the sorting approach is that it is not fully compatible with manual usages of the same locks outside the @mutex@ statement, \ie the locks are acquired without using the @mutex@ statement.
The following scenario is a classic deadlock.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{30pt}}l@{}}
\begin{cfa}
lock L1, L2; // assume &L1 < &L2
$\textbf{thread\(_1\)}$
acquire( L2 );
acquire( L1 );
	CS
release( L1 );
release( L2 );
\end{cfa}
&
\begin{cfa}

$\textbf{thread\(_2\)}$
mutex( L1, L2 ) {

	CS

}
\end{cfa}
\end{tabular}
\end{cquote}
Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
The convenience and safety of the @mutex@ statement, \eg guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating any deadlock scenario.

\section{Performance}
Given the two multi-acquisition algorithms in \CC and \CFA, each with differing advantages and disadvantages, it is interesting to compare their performance.
Comparison with Java is not possible, since its synchronized statement only takes a single lock.

The comparison starts with a baseline that acquires the locks directly, without a mutex statement or @scoped_lock@, in a fixed ordering and then releases them.
The baseline helps highlight the cost of the deadlock avoidance/prevention algorithms for each implementation.

The benchmark used to evaluate the avoidance algorithms repeatedly acquires a fixed number of locks in a random order and then releases them.
The pseudo code for the deadlock avoidance benchmark is shown in \VRef[Listing]{l:deadlock_avoid_pseudo}.
To ensure the comparison exercises the implementation of each lock-avoidance algorithm, an identical spinlock is implemented in each language using a set of builtin atomics available in both \CC and \CFA.
The benchmarks are run for a fixed duration of 10 seconds and then terminate.
The total number of times the group of locks is acquired is returned for each thread.
Each variation is run 11 times on 2, 4, 8, 16, 24, and 32 cores, and with 2, 4, and 8 locks being acquired.
The median is calculated and is plotted alongside the 95\% confidence intervals for each point.

\begin{cfa}[caption={Deadlock avoidance benchmark pseudo code},label={l:deadlock_avoid_pseudo}]
$\PAB{// add pseudo code}$
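// per-thread sketch of the benchmark described above (assumed structure):
count := 0
while elapsed time < 10 seconds			// fixed-duration run
	shuffle the group of locks			// random acquisition order
	acquire the group					// mutex statement, scoped_lock, or fixed-order baseline
	release the group
	count := count + 1
return count							// total acquisitions for this thread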
\end{cfa}

The performance experiments were run on the following multi-core hardware systems to determine differences across platforms:
\begin{list}{\arabic{enumi}.}{\usecounter{enumi}\topsep=5pt\parsep=5pt\itemsep=0pt}
% sudo dmidecode -t system
\item
Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
\item
Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 processing units) 2.2 GHz, TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
\end{list}
%The hardware architectures are different in threading (multithreading vs hyper), cache structure (MESI or MESIF), NUMA layout (QPI vs HyperTransport), memory model (TSO vs WO), and energy/thermal mechanisms (turbo-boost).
%Software that runs well on one architecture may run poorly or not at all on another.

Figure~\ref{f:mutex_bench} shows the results of the benchmark experiments.
\PAB{Make the points in the graphs for each line different.
Also, make the text in the graphs larger.}
The baseline results for both languages are mostly comparable, except for the 8-lock results in Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, where the \CFA baseline is slightly slower.
The avoidance results for the two languages are significantly different, where \CFA's mutex statement achieves throughput that is orders of magnitude higher than \CC's @scoped_lock@.
The slowdown for @scoped_lock@ is likely due to its deadlock-avoidance implementation.
Since it uses a retry-based mechanism, it can take a long time for threads to progress.
Additionally, the potential for livelock in the algorithm can result in very little throughput under high contention.
For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain the large confidence intervals for some of the data points in the \CC data.
In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, the mutex statement performs better than the baseline.
At 7 locks and above, the mutex statement switches from the hard-coded sort to insertion sort.
It is likely that the improvement in throughput compared to the baseline is due to the time spent in the insertion sort, which decreases contention on the locks.

\begin{figure}
\centering
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_2.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_2.pgf}}
}

\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_4.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_4.pgf}}
}

\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_8.pgf}}
	\label{f:mutex_bench8_AMD}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_8.pgf}}
	\label{f:mutex_bench8_Intel}
}
\caption{The aggregate lock benchmark comparing \CC \lstinline{scoped_lock} and \CFA mutex statement throughput (higher is better).}
\label{f:mutex_bench}
\end{figure}

% Local Variables: %
% tab-width: 4 %
% End: %