% ======================================================================
% ======================================================================
\chapter{Mutex Statement}\label{s:mutexstmt}
% ======================================================================
% ======================================================================

The mutual exclusion problem was introduced by Dijkstra in 1965~\cite{Dijkstra65,Dijkstra65a}:
there are several concurrent processes or threads that communicate by shared variables and from time to time need exclusive access to shared resources.
A shared resource and the code manipulating it form a pairing called a \Newterm{critical section (CS)}, which is a many-to-one relationship;
\eg if multiple files are being written to by multiple threads, only the pairings of simultaneous writes to the same files are CSs.
Regions of code where the thread is not interested in the resource are combined into the \Newterm{non-critical section (NCS)}.

Exclusive access to a resource is provided by \Newterm{mutual exclusion (MX)}.
MX is implemented by some form of \emph{lock}, where the CS is bracketed by the lock procedures @acquire@ and @release@.
Threads execute a loop of the form:
\begin{cfa}
loop of $thread$ p:
	NCS;
	acquire( lock ); CS; release( lock );	// protected critical section with MX
end loop.
\end{cfa}
MX guarantees there is never more than one thread in the CS.
MX must also guarantee eventual progress: when there are competing threads attempting access, eventually some competing thread succeeds, \ie acquires the CS, releases it, and returns to the NCS.
% Lamport \cite[p.~329]{Lam86mx} extends this requirement to the exit protocol.
A stronger constraint is that every thread that calls @acquire@ eventually succeeds after some reasonable bounded time.

\section{Monitor}
\CFA provides a high-level locking object, called a \Newterm{monitor}, an elegant and efficient abstraction for providing mutual exclusion and synchronization for shared-memory systems.
The monitor was first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}.
Several concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.
Figure~\ref{f:AtomicCounter} shows a \CFA and a Java monitor implementing an atomic counter.

A \Newterm{monitor} is a programming technique that implicitly binds mutual exclusion to static function scope by call and return.
In contrast, lock mutual exclusion, defined by acquire/release calls, is independent of lexical context (analogous to stack versus heap storage allocation).
Restricting the acquire and release points in a monitor eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
Ultimately, a monitor is implemented using a combination of basic locks and atomic instructions.
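
For contrast with the monitor in Figure~\ref{f:AtomicCounter}, the following sketch protects the same counter with an explicit lock, so every access must be manually bracketed by acquire and release.
The sketch assumes the library type @owner_lock@ (used later in this chapter) with @lock@ and @unlock@ routines; the wrapping shown is illustrative, not a library idiom.
\begin{cfa}
struct LAint { owner_lock l; int cnt; };	$\C{// counter plus explicit lock}$
int ++?( LAint & a ) {
	lock( a.l );	$\C{// acquire before every access}$
	int v = ++a.cnt;	$\C{// critical section}$
	unlock( a.l );	$\C{// must release on every exit path}$
	return v;	$\C{// copy out before returning}$
}
\end{cfa}
The monitor version makes the acquire and release implicit at call and return, so no access can accidentally bypass the lock.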
\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
@monitor@ Aint { int cnt; };
int ++?( Aint & @mutex@ m ) { return ++m.cnt; }
int ?=?( Aint & @mutex@ l, int r ) { return l.cnt = r; }
int ?=?( int & l, Aint & r ) { return l = r.cnt; }
int i = 0, j = 0;
Aint x = { 0 }, y = { 0 };	$\C[1.5in]{// no mutex}$
++x; ++y;	$\C{// mutex}$
x = 2; y = i;	$\C{// mutex}$
i = x; j = y;	$\C{// no mutex}\CRT$
\end{cfa}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{java}[aboveskip=0pt,belowskip=0pt]
class Aint {
	private int cnt;
	public Aint( int init ) { cnt = init; }
	@synchronized@ public int inc() { return ++cnt; }
	@synchronized@ public void set( int r ) { cnt = r; }
	public int get() { return cnt; }
}
int i = 0, j = 0;
Aint x = new Aint( 0 ), y = new Aint( 0 );
x.inc(); y.inc();
x.set( 2 ); y.set( i );
i = x.get(); j = y.get();
\end{java}
\end{lrbox}
\subfloat[\CFA]{\label{f:AtomicCounterCFA}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[Java]{\label{f:AtomicCounterJava}\usebox\myboxB}
\caption{Atomic integer counter}
\label{f:AtomicCounter}
\end{figure}

Like Java, \CFA monitors have \Newterm{multi-acquire} semantics, so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling of other MX functions.
For robustness, \CFA monitors ensure the monitor lock is released regardless of how an acquiring function ends, normally or exceptionally, and returning a shared variable is safe via copying before the lock is released.
Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called.
\CFA functions are designated MX by one or more pointer/reference parameters having the qualifier @mutex@.
Java members are designated MX with \lstinline[language=java]{synchronized}, which applies only to the implicit receiver parameter.
In the example above, the increment and setter operations need mutual exclusion, while the read-only getter operation is not MX because reading an integer is atomic.

As stated, the non-object-oriented nature of \CFA monitors allows a function to acquire multiple mutex objects.
For example, the bank-transfer problem requires locking two bank accounts to safely debit and credit money between accounts.
\begin{cfa}
monitor BankAccount {
	int balance;
};
void deposit( BankAccount & mutex b, int deposit ) with( b ) {
	balance += deposit;
}
void transfer( BankAccount & mutex my, BankAccount & mutex your, int me2you ) {
	deposit( my, -me2you );	$\C{// debit}$
	deposit( your, me2you );	$\C{// credit}$
}
\end{cfa}
The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments.
It is important to note that \CFA monitors do not attempt to solve the nested monitor problem~\cite{Lister77}.

\section{\lstinline{mutex} statement}
Restricting implicit lock acquisition to function entry and exit can be awkward for certain problems.
To increase locking flexibility, some languages introduce a mutex statement.
\VRef[Figure]{f:ReadersWriter} shows the outline of a reader/writer lock written using a \CFA monitor and, alternatively, using mutex statements.
(The exact lock implementation is irrelevant.)
The @read@ and @write@ functions are called with a reader/writer lock and any arguments to perform reading or writing.
The @read@ function is not MX because multiple readers can read simultaneously.
MX is acquired within @read@ by calling the (nested) helper functions @StartRead@ and @EndRead@ or by executing the mutex statements.
Between the calls or statements, reads can execute simultaneously within the body of @read@.
The @write@ function does not require refactoring because writing is a CS.
The mutex-statement version is better because it has fewer names, less argument/parameter passing, and can possibly hold MX for a shorter duration.

\begin{figure}
\centering
\begin{lrbox}{\myboxA}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
monitor RWlock { ... };
void read( RWlock & rw, ... ) {
	void StartRead( RWlock & @mutex@ rw ) { ... }
	void EndRead( RWlock & @mutex@ rw ) { ... }
	StartRead( rw );
	...	// read without MX
	EndRead( rw );
}
void write( RWlock & @mutex@ rw, ... ) {
	...	// write with MX
}
\end{cfa}
\end{lrbox}
\begin{lrbox}{\myboxB}
\begin{cfa}[aboveskip=0pt,belowskip=0pt]
void read( RWlock & rw, ... ) {
	@mutex@( rw ) { ... }
	...	// read without MX
	@mutex@( rw ) { ... }
}
void write( RWlock & @mutex@ rw, ... ) {
	...	// write with MX
}
\end{cfa}
\end{lrbox}
\subfloat[monitor]{\label{f:RWmonitor}\usebox\myboxA}
\hspace*{3pt}
\vrule
\hspace*{3pt}
\subfloat[mutex statement]{\label{f:RWmutexstmt}\usebox\myboxB}
\caption{Readers/writer problem}
\label{f:ReadersWriter}
\end{figure}

This work adds a mutex statement to \CFA, but generalizes it beyond implicit monitor locks.
In detail, the mutex statement has a clause and a statement block, similar to a conditional or loop statement.
The clause accepts any number of lockable objects (like a \CFA MX function prototype), and locks them for the duration of the statement.
The locks are acquired in a deadlock-free manner and released regardless of how control flow exits the statement.
The mutex statement provides easy lock usage in the common case of lexically wrapping a CS.
Examples of the \CFA mutex statement are shown in \VRef[Listing]{l:cfa_mutex_ex}.
\begin{cfa}[caption={\CFA mutex statement usage},label={l:cfa_mutex_ex}]
owner_lock lock1, lock2, lock3;
@mutex@( lock2, lock3 ) ...;	$\C{// inline statement}$
@mutex@( lock1, lock2, lock3 ) { ... }	$\C{// statement block}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
	...	// check values, no MX
	@mutex@( my, your ) {	// MX is shorter duration than the function body
		deposit( my, -me2you );	$\C{// debit}$
		deposit( your, me2you );	$\C{// credit}$
	}
}
\end{cfa}

\section{Other Languages}
There are similar constructs to the mutex statement in other programming languages.
Java has a feature called a synchronized statement, which looks like \CFA's mutex statement, but only accepts a single object in the clause and only handles monitor locks.
The \CC standard library has a @scoped_lock@, which is also similar to the mutex statement.
The @scoped_lock@ takes any number of locks in its constructor, and acquires them in a deadlock-free manner.
It then releases them when the @scoped_lock@ object is destroyed, using \gls{raii}.
An example of \CC @scoped_lock@ is shown in \VRef[Listing]{l:cc_scoped_lock}.
\begin{cfa}[caption={\CC \lstinline{scoped_lock} usage},label={l:cc_scoped_lock}]
struct BankAccount {
	@recursive_mutex m;@	$\C{// must be recursive}$
	int balance = 0;
};
void deposit( BankAccount & b, int deposit ) {
	@scoped_lock lock( b.m );@	$\C{// RAII acquire}$
	b.balance += deposit;
}	$\C{// RAII release}$
void transfer( BankAccount & my, BankAccount & your, int me2you ) {
	@scoped_lock lock( my.m, your.m );@	$\C{// RAII acquire}$
	deposit( my, -me2you );	$\C{// debit}$
	deposit( your, me2you );	$\C{// credit}$
}	$\C{// RAII release}$
\end{cfa}

\section{\CFA implementation}
The \CFA mutex statement takes some ideas from both the Java and \CC features.
Like Java, \CFA introduces a new statement rather than building from existing language features, although \CFA has sufficient language features to mimic \CC RAII locking.
This syntactic choice makes MX explicit rather than implicit via object declarations.
Hence, it is easier for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code.
Furthermore, concurrent safety is provided across an entire program for the complex operation of acquiring multiple locks in a deadlock-free manner.

Unlike Java, \CFA's mutex statement and \CC's @scoped_lock@ both use parametric polymorphism to allow user-defined types to work with this feature.
In this case, the polymorphism allows a locking mechanism to acquire MX over an object without having to know the object internals or what kind of lock it is using.
\CFA provides and uses this locking trait, and a sketch of a user-defined lock satisfying it appears below:
\begin{cfa}
forall( L & | sized(L) ) trait is_lock {
	void lock( L & );
	void unlock( L & );
};
\end{cfa}
\CC's @scoped_lock@ has this trait implicitly, based on the functions accessed through its template parameters.
@scoped_lock@ also requires @try_lock@ because of its technique for deadlock avoidance \see{\VRef{s:DeadlockAvoidance}}.

The following shows how the @mutex@ statement is used with \CFA streams to eliminate unpredictable results when printing in a concurrent program.
For example, if two threads execute:
\begin{cfa}
thread$\(_1\)$ : sout | "abc" | "def";
thread$\(_2\)$ : sout | "uvw" | "xyz";
\end{cfa}
any of the following outputs can appear, including a segmentation fault due to I/O buffer corruption:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l|l|l|l@{}}
abc def & abc uvw xyz & uvw abc xyz def & abuvwc dexf & uvw abc def \\
uvw xyz & def & & yz & xyz
\end{tabular}
\end{cquote}
The stream type for @sout@ is defined to satisfy the @is_lock@ trait, so the @mutex@ statement can be used to lock an output stream while producing output.
From the programmer's perspective, it is sufficient to know an object can be locked and then any necessary MX is easily available via the @mutex@ statement.
This ability improves safety and programmer productivity since it abstracts away the concurrency details.
Hence, a programmer can easily protect cascaded I/O expressions:
\begin{cfa}
thread$\(_1\)$ : mutex( sout ) sout | "abc" | "def";
thread$\(_2\)$ : mutex( sout ) sout | "uvw" | "xyz";
\end{cfa}
constraining the output to two different lines in either order:
\begin{cquote}
\small\tt
\begin{tabular}{@{}l|l@{}}
abc def & uvw xyz \\
uvw xyz & abc def
\end{tabular}
\end{cquote}
where this level of safe nondeterministic output is acceptable.
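
The structural @is_lock@ trait means the mutex statement is not limited to \CFA library locks: any user-defined type with suitable @lock@ and @unlock@ routines can appear in the clause.
As a sketch, where the overloads below are illustrative assumptions rather than library code, a POSIX mutex can be adapted directly:
\begin{cfa}
#include <pthread.h>
void lock( pthread_mutex_t & m ) { pthread_mutex_lock( &m ); }	$\C{// free routines satisfy the trait}$
void unlock( pthread_mutex_t & m ) { pthread_mutex_unlock( &m ); }
pthread_mutex_t pa = PTHREAD_MUTEX_INITIALIZER, pb = PTHREAD_MUTEX_INITIALIZER;
int main() {
	@mutex@( pa, pb ) { ... }	$\C{// both locks held, acquired deadlock-free}$
}
\end{cfa}
Since the overloads satisfy @is_lock@, the statement orders and acquires both pthread mutexes without any user-written locking code.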
Returning to the stream example, multiple I/O statements can also be protected using a mutex statement block:
\begin{cfa}
mutex( sout ) {	// acquire stream lock for sout for block duration
	sout | "abc";
	mutex( sout ) sout | "uvw" | "xyz";	// OK because sout lock is recursive
	sout | "def";
}	// implicitly release sout lock
\end{cfa}
The inner lock acquire is likely to occur through a function call that does a thread-safe print.

\section{Deadlock Avoidance}\label{s:DeadlockAvoidance}
The mutex statement uses the deadlock-avoidance technique of lock ordering, where the circular-wait condition of a deadlock cannot occur if all locks are acquired in the same order.
The @scoped_lock@ uses a deadlock-avoidance algorithm where all locks after the first are acquired using @try_lock@, and if any of the lock attempts fail, all acquired locks are released.
This repeats after selecting a new starting point in a cyclic manner until all locks are acquired successfully.
This deadlock-avoidance algorithm is shown in Figure~\ref{f:cc_deadlock_avoid}.
The algorithm is taken directly from the source code of the \CC @<mutex>@ header, with some renaming and comments for clarity.

\begin{figure}
\begin{cfa}
int first = 0;	// first lock to attempt to lock
do {
	// locks is the array of locks to acquire
	locks[first].lock();	$\C{// lock first lock}$
	for ( int i = 1; i < Num_Locks; i += 1 ) {	$\C{// iterate over rest of locks}$
		const int idx = (first + i) % Num_Locks;
		if ( ! locks[idx].try_lock() ) {	$\C{// try lock each one}$
			for ( int j = i; j != 0; j -= 1 )	$\C{// release all locks}$
				locks[(first + j - 1) % Num_Locks].unlock();
			first = idx;	$\C{// rotate which lock to acquire first}$
			break;
		}
	}
	// if first lock is still held then all have been acquired
} while ( ! locks[first].owns_lock() );	$\C{// is first lock held?}$
\end{cfa}
\caption{\CC \lstinline{scoped_lock} deadlock avoidance algorithm}
\label{f:cc_deadlock_avoid}
\end{figure}

While this algorithm successfully avoids deadlock, there is a livelock scenario.
Assume two threads, $A$ and $B$, each create a @scoped_lock@ accessing two locks, $L1$ and $L2$.
A livelock can form as follows.
Thread $A$ creates a @scoped_lock@ with arguments $L1$, $L2$, and $B$ creates a @scoped_lock@ with the lock arguments in the opposite order $L2$, $L1$.
Both threads acquire the first lock in their order and then fail the @try_lock@ since the other lock is held.
Both threads then reset their starting lock to be their second lock and try again.
This time $A$ has order $L2$, $L1$, and $B$ has order $L1$, $L2$, which is identical to the starting setup but with the ordering swapped between threads.
If the threads perform this action in lock-step, they cycle indefinitely without entering the CS, \ie livelock.
Hence, to use @scoped_lock@ safely, a programmer must manually construct and maintain a global ordering of the lock arguments passed to @scoped_lock@.

The lock-ordering algorithm used in \CFA mutex functions and statements is deadlock-free and livelock-free.
The algorithm uses the lock memory addresses as keys, sorts the keys, and then acquires the locks in sorted order.
For fewer than 7 locks ($2^3-1$), the sort is unrolled, performing the minimum number of compares and swaps for the given number of locks; for 7 or more locks, insertion sort is used.
Since it is extremely rare to hold more than 6 locks at a time, the algorithm is fast and executes in $O(1)$ time.
Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution.
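
To make the ordering concrete, the following sketch, which is illustrative rather than the actual \CFA runtime code, acquires two locks satisfying @is_lock@ in address order:
\begin{cfa}
#include <stdint.h>
forall( L & | is_lock( L ) )
void lock_ordered( L & l1, L & l2 ) {	$\C{// acquire both locks in address order}$
	if ( (uintptr_t)&l1 < (uintptr_t)&l2 ) { lock( l1 ); lock( l2 ); }
	else { lock( l2 ); lock( l1 ); }
}
\end{cfa}
Because every thread orders the same locks identically, a cycle of threads each holding one lock and waiting for another cannot form; the \CFA implementation generalizes this idea to any number of mutex arguments by sorting all their addresses before acquiring.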
The downside to the sorting approach is that it is not fully compatible with manual uses of the same locks outside the @mutex@ statement, \ie the locks are acquired without using the @mutex@ statement.
The following scenario is a classic deadlock.
\begin{cquote}
\begin{tabular}{@{}l@{\hspace{30pt}}l@{}}
\begin{cfa}
lock L1, L2;	// assume &L1 < &L2
$\textbf{thread\(_1\)}$
acquire( L2 );
acquire( L1 );
	CS
release( L1 );
release( L2 );
\end{cfa}
&
\begin{cfa}
$\textbf{thread\(_2\)}$
mutex( L1, L2 ) {
	CS
}
\end{cfa}
\end{tabular}
\end{cquote}
Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
The convenience and safety of the @mutex@ statement, \ie guaranteed lock release with exceptions, should encourage programmers to always use it for locking, which avoids the deadlock scenarios that arise from combining manual locking with the mutex statement.
Neither \CC nor \CFA provides any deadlock guarantees for nested @scoped_lock@s or @mutex@ statements.
To do so would require solving the nested monitor problem~\cite{Lister77}, which currently does not have any practical solutions.

\section{Performance}
Given the two multi-acquisition algorithms in \CC and \CFA, each with differing advantages and disadvantages, it is interesting to compare their performance.
Comparison with Java is not possible, since its synchronized statement only takes a single lock.

The comparison starts with a baseline that acquires the locks directly, in a fixed order and without a mutex statement or @scoped_lock@, and then releases them.
The baseline helps highlight the cost of the deadlock avoidance/prevention algorithms for each implementation.
The benchmark used to evaluate the avoidance algorithms repeatedly acquires a fixed number of locks in a random order and then releases them.
The pseudocode for the deadlock-avoidance benchmark is shown in \VRef[Figure]{l:deadlock_avoid_pseudo}.
To ensure the comparison exercises the implementation of each lock-avoidance algorithm, an identical spinlock is implemented in each language using a set of builtin atomics available in both \CC and \CFA.
The benchmarks are run for a fixed duration of 10 seconds and then terminate.
The total number of times the group of locks is acquired is returned for each thread.
Each variation is run 11 times on 2, 4, 8, 16, 24, 32 cores and with 2, 4, and 8 locks being acquired.
The median is calculated and is plotted alongside the 95\% confidence intervals for each point.
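
The spinlock used by the benchmark is implemented with the gcc @__atomic@ builtins; the following is a minimal sketch of such a lock, where the name @test_spinlock@ matches \VRef[Figure]{l:deadlock_avoid_pseudo} but the body is an assumption rather than the benchmark source.
The \CC version additionally exposes @lock@, @try_lock@, and @unlock@ members so it can be passed to @scoped_lock@.
\begin{cfa}
#include <stdbool.h>
struct test_spinlock { volatile bool taken; };
void lock( test_spinlock & slk ) {
	while ( __atomic_test_and_set( &slk.taken, __ATOMIC_ACQUIRE ) )	$\C{// true means already held}$
		while ( __atomic_load_n( &slk.taken, __ATOMIC_RELAXED ) ) {}	$\C{// spin reading until free}$
}
void unlock( test_spinlock & slk ) {
	__atomic_clear( &slk.taken, __ATOMIC_RELEASE );	$\C{// release the lock}$
}
\end{cfa}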
\begin{figure}
\begin{cfa}
size_t n_locks;	$\C{// number of locks}$
size_t n_thds;	$\C{// number of threads}$
size_t n_gens;	$\C{// number of random orderings (default 100)}$
size_t total = 0;	$\C{// global throughput aggregator}$
volatile bool done = false;	$\C{// termination flag}$
test_spinlock locks[n_locks];
size_t rands[n_thds][n_locks * n_gens];	$\C{// random orderings, one row per thread}$
void main( worker & w ) with(w) {	$\C{// thread main; id is this worker's index}$
	size_t count = 0, idx = 0;
	while ( ! done ) {
		idx = (count % n_gens) * n_locks;	$\C{// start of next random ordering}$
		mutex( locks[rands[id][idx]], ..., locks[rands[id][idx + n_locks - 1]] ) {}	$\C{// lock sequence of locks}$
		count++;
	}
	__atomic_add_fetch( &total, count, __ATOMIC_SEQ_CST );	$\C{// atomically add to total}$
}
int main( int argc, char * argv[] ) {
	gen_orders();	$\C{// generate random orderings}$
	{
		worker w[n_thds];
		sleep( 10`s );
		done = true;
	}
	printf( "%lu\n", total );
}
\end{cfa}
\caption{Deadlock avoidance benchmark pseudocode}
\label{l:deadlock_avoid_pseudo}
\end{figure}

The performance experiments were run on the following multi-core hardware systems to determine differences across platforms:
\begin{list}{\arabic{enumi}.}{\usecounter{enumi}\topsep=5pt\parsep=5pt\itemsep=0pt}
% sudo dmidecode -t system
\item
Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
\item
Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 processing units) 2.2 GHz, TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
\end{list}
%The hardware architectures are different in threading (multithreading vs hyper), cache structure (MESI or MESIF), NUMA layout (QPI vs HyperTransport), memory model (TSO vs WO), and energy/thermal mechanisms (turbo-boost).
%Software that runs well on one architecture may run poorly or not at all on another.

Figure~\ref{f:mutex_bench} shows the results of the benchmark experiments.
The baseline results for both languages are mostly comparable, except for the 8-lock results in Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, where the \CFA baseline is slightly slower.
The avoidance results for the two languages are significantly different: \CFA's mutex statement achieves throughput that is orders of magnitude higher than \CC's @scoped_lock@.
The slowdown for @scoped_lock@ is likely due to its deadlock-avoidance implementation.
Since it uses a retry-based mechanism, it can take a long time for threads to progress.
Additionally, the potential for livelock in the algorithm can result in very little throughput under high contention.
For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain the large confidence intervals for some of the data points in the \CC data.

In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, there is the counter-intuitive result of the mutex statement performing better than the baseline.
At 7 locks and above, the mutex statement switches from a hard-coded sort to an insertion sort, which should decrease performance.
The hard-coded sort is branch-free and constant-time, and was verified to be faster than insertion sort for 6 or fewer locks.
It is likely that the increase in throughput over the baseline is due to the delay incurred by the insertion sort, which decreases contention on the locks.

\begin{figure}
\centering
\captionsetup[subfloat]{labelfont=footnotesize,textfont=footnotesize}
\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_2.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_2.pgf}}
}

\bigskip

\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_4.pgf}}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_4.pgf}}
}

\bigskip

\subfloat[AMD]{
	\resizebox{0.5\textwidth}{!}{\input{figures/nasus_Aggregate_Lock_8.pgf}}
	\label{f:mutex_bench8_AMD}
}
\subfloat[Intel]{
	\resizebox{0.5\textwidth}{!}{\input{figures/pyke_Aggregate_Lock_8.pgf}}
	\label{f:mutex_bench8_Intel}
}
\caption{The aggregate lock benchmark comparing \CC \lstinline{scoped_lock} and \CFA mutex statement throughput (higher is better).}
\label{f:mutex_bench}
\end{figure}

% Local Variables: %
% tab-width: 4 %
% End: %