Timestamp: Sep 11, 2023, 12:55:43 PM (9 months ago)
Author: caparsons <caparson@…>
Branches: master
Children: c8ec58e
Parents: 3ee8853
Message: Incorporated changes in response to Trevor's comments.
File: 1 edited

  • doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex

r3ee8853 → r9509d67a

  \end{figure}

- Like Java, \CFA monitors have \Newterm{multi-acquire} semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling of other MX functions.
+ Like Java, \CFA monitors have \Newterm{multi-acquire} (reentrant locking) semantics so the thread in the monitor may acquire it multiple times without deadlock, allowing recursion and calling of other MX functions.
  For robustness, \CFA monitors ensure the monitor lock is released regardless of how an acquiring function ends, normal or exceptional, and returning a shared variable is safe via copying before the lock is released.
  Monitor objects can be passed through multiple helper functions without acquiring mutual exclusion, until a designated function associated with the object is called.
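The reentrant behaviour described above has a direct \CC analogue in @std::recursive_mutex@; the sketch below is illustrative only (not from the thesis) and shows a thread re-acquiring the same lock through recursion without self-deadlock.

#include <mutex>

std::recursive_mutex rm;   // C++ analogue of multi-acquire (reentrant) locking

void traverse( int depth ) {
    std::lock_guard<std::recursive_mutex> guard( rm );  // same thread may re-acquire rm on recursion
    if ( depth > 0 ) traverse( depth - 1 );             // no self-deadlock
}   // each guard releases one level; the outermost release frees the mutex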
     
  }
  \end{cfa}
- The \CFA monitor implementation ensures multi-lock acquisition is done in a deadlock-free manner regardless of the number of MX parameters and monitor arguments. It it important to note that \CFA monitors do not attempt to solve the nested monitor problem~\cite{Lister77}.
+ The \CFA monitor implementation uses resource ordering to ensure multi-lock acquisition is deadlock free, regardless of the number of MX parameters and monitor arguments.
+ It is important to note that \CFA monitors do not attempt to solve the nested monitor problem~\cite{Lister77}.

  \section{\lstinline{mutex} statement}
     
  In detail, the mutex statement has a clause and statement block, similar to a conditional or loop statement.
  The clause accepts any number of lockable objects (like a \CFA MX function prototype), and locks them for the duration of the statement.
- The locks are acquired in a deadlock free manner and released regardless of how control-flow exits the statement.
+ The locks are acquired in a deadlock-free manner and released regardless of how control-flow exits the statement.
+ Note that this deadlock-freedom has some limitations \see{\VRef{s:DeadlockAvoidance}}.
  The mutex statement provides easy lock usage in the common case of lexically wrapping a CS.
  Examples of \CFA mutex statement are shown in \VRef[Listing]{l:cfa_mutex_ex}.
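For readers more familiar with \CC, the closest idiom to the mutex statement is a block containing a @std::scoped_lock@; the sketch below is an approximation (not the \CFA implementation) showing multiple locks acquired with deadlock avoidance and released on any exit, including an exception.

#include <mutex>
#include <stdexcept>

std::mutex m1, m2;
int shared_counter = 0;                  // hypothetical shared data

void update() {
    // roughly what "mutex( m1, m2 ) { ... }" expresses in CFA:
    // acquire both locks with deadlock avoidance, release them on any exit
    std::scoped_lock lock( m1, m2 );
    shared_counter += 1;
    if ( shared_counter == 1000000 ) throw std::runtime_error( "done" );  // locks still released
}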
     
  Like Java, \CFA introduces a new statement rather than building from existing language features, although \CFA has sufficient language features to mimic \CC RAII locking.
  This syntactic choice makes MX explicit rather than implicit via object declarations.
- Hence, it is easier for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code.
+ Hence, it is easy for programmers and language tools to identify MX points in a program, \eg scan for all @mutex@ parameters and statements in a body of code; similar scanning can be done with Java's @synchronized@.
  Furthermore, concurrent safety is provided across an entire program for the complex operation of acquiring multiple locks in a deadlock-free manner.
  Unlike Java, \CFA's mutex statement and \CC's @scoped_lock@ both use parametric polymorphism to allow user-defined types to work with this feature.
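On the \CC side, the parametric polymorphism mentioned above means any user-defined type satisfying the Lockable requirements (@lock@, @unlock@, @try_lock@) works with @scoped_lock@; the sketch below uses a hypothetical minimal spinlock to illustrate, and is not code from the thesis.

#include <atomic>
#include <mutex>

// Hypothetical user-defined lock: a minimal spinlock meeting the C++ Lockable
// requirements, so it can be combined with std::mutex in a std::scoped_lock.
class Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
  public:
    void lock() { while ( flag.test_and_set( std::memory_order_acquire ) ) {} }
    bool try_lock() { return ! flag.test_and_set( std::memory_order_acquire ); }
    void unlock() { flag.clear( std::memory_order_release ); }
};

Spinlock s;
std::mutex m;

void work() {
    std::scoped_lock lock( s, m );   // mixes a user-defined lock and a standard mutex
    // critical section
}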
     
  thread$\(_2\)$ : sout | "uvw" | "xyz";
  \end{cfa}
- any of the outputs can appear, included a segment fault due to I/O buffer corruption:
+ any of the outputs can appear:
  \begin{cquote}
  \small\tt
     
  mutex( sout ) { // acquire stream lock for sout for block duration
  	sout | "abc";
- 	mutex( sout ) sout | "uvw" | "xyz"; // OK because sout lock is recursive
+ 	sout | "uvw" | "xyz";
  	sout | "def";
  } // implicitly release sout lock
  \end{cfa}
- The inner lock acquire is likely to occur through a function call that does a thread-safe print.

  \section{Deadlock Avoidance}\label{s:DeadlockAvoidance}
     
  For fewer than 7 locks ($2^3-1$), the sort is unrolled performing the minimum number of compare and swaps for the given number of locks;
  for 7 or more locks, insertion sort is used.
- Since it is extremely rare to hold more than 6 locks at a time, the algorithm is fast and executes in $O(1)$ time.
- Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution.
+ It is assumed to be rare to hold more than 6 locks at a time.
+ For 6 or fewer locks, the algorithm is fast and executes in $O(1)$ time.
+ Furthermore, lock addresses are unique across program execution, even for dynamically allocated locks, so the algorithm is safe across the entire program execution, as long as the lifetimes of lock objects are appropriately managed.
+ For example, deleting a lock and allocating another one could give the new lock the same address as the deleted one; however, deleting a lock in use by another thread is a programming error irrespective of the usage of the @mutex@ statement.

  The downside to the sorting approach is that it is not fully compatible with manual usages of the same locks outside the @mutex@ statement, \ie the locks are acquired without using the @mutex@ statement.
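A minimal sketch of the resource-ordering idea described above (not the actual \CFA runtime code): treat the unique lock addresses as a total order and acquire in that order, so all threads locking an overlapping set of locks agree on the acquisition sequence and cannot deadlock.

#include <algorithm>
#include <functional>
#include <mutex>
#include <vector>

// Deadlock avoidance by resource ordering: sort the locks by address and
// always acquire in that sorted order.
void lock_in_address_order( std::vector<std::mutex *> locks ) {
    std::sort( locks.begin(), locks.end(), std::less<std::mutex *>() );  // total order on addresses
    for ( std::mutex * m : locks ) m->lock();                            // acquire in sorted order
}

void unlock_all( const std::vector<std::mutex *> & locks ) {
    for ( std::mutex * m : locks ) m->unlock();  // release order does not affect deadlock freedom
}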
     
  \end{cquote}
  Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
- The convenience and safety of the @mutex@ statement, \ie guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating any deadlock scenario versus combining manual locking with the mutex statement.
+ The convenience and safety of the @mutex@ statement, \ie guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating most deadlock scenarios versus combining manual locking with the mutex statement.
  Both \CC and \CFA do not provide any deadlock guarantees for nested @scoped_lock@s or @mutex@ statements.
  To do so would require solving the nested monitor problem~\cite{Lister77}, which currently does not have any practical solutions.
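For contrast with the sorting approach, @scoped_lock@ (via @std::lock@) uses an unspecified avoidance algorithm; one common strategy is try-and-back-off, sketched below as an assumption about typical implementations rather than a quotation of any library. It never blocks while holding a lock, so it cannot deadlock, but two threads can repeatedly acquire one lock, fail on the other, release, and retry, which is the livelock behaviour mentioned above.

#include <mutex>

// Try-and-back-off sketch for two locks: block only while holding nothing,
// try_lock the other; on failure, release and retry starting from the other lock.
void lock_avoid( std::mutex & a, std::mutex & b ) {
    for ( ;; ) {
        a.lock();
        if ( b.try_lock() ) return;  // got both
        a.unlock();                  // back off
        b.lock();
        if ( a.try_lock() ) return;
        b.unlock();                  // back off and retry
    }
}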
     
  \section{Performance}
  Given the two multi-acquisition algorithms in \CC and \CFA, each with differing advantages and disadvantages, it is interesting to compare their performance.
- Comparison with Java is not possible, since it only takes a single lock.
+ Comparison with Java was not conducted, since the @synchronized@ statement only takes a single object and does not provide deadlock avoidance or prevention.
    347350
  The comparison starts with a baseline that acquires the locks directly without a mutex statement or @scoped_lock@ in a fixed ordering and then releases them.
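A rough sketch of the baseline variant as described above; the lock count and the critical-section work shown here are assumptions for illustration, not the benchmark's actual code.

#include <mutex>

std::mutex locks[8];                  // locks under test (8-lock variant assumed)
volatile unsigned long long ops = 0;  // stand-in for the critical-section work

// Baseline: acquire the locks directly in a fixed order (no mutex statement,
// no scoped_lock), do a trivial critical section, then release.
// A single global fixed order is itself sufficient to prevent deadlock.
void baseline_iteration( int nlocks ) {
    for ( int i = 0; i < nlocks; i += 1 ) locks[i].lock();
    ops = ops + 1;                    // critical section
    for ( int i = nlocks - 1; i >= 0; i -= 1 ) locks[i].unlock();
}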
     
  Each variation is run 11 times on 2, 4, 8, 16, 24, 32 cores and with 2, 4, and 8 locks being acquired.
  The median is calculated and is plotted alongside the 95\% confidence intervals for each point.
+ The confidence intervals are calculated using bootstrapping to avoid normality assumptions.

  \begin{figure}
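A sketch of a percentile-bootstrap confidence interval for the median, assuming that is the bootstrap variant used; the exact procedure is not spelled out in the text, so the details below (resample count, seed) are illustrative.

#include <algorithm>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

double median( std::vector<double> v ) {
    std::sort( v.begin(), v.end() );
    std::size_t n = v.size();
    return n % 2 ? v[n / 2] : ( v[n / 2 - 1] + v[n / 2] ) / 2.0;
}

// Percentile bootstrap: resample with replacement, take the median of each
// resample, and report the 2.5th and 97.5th percentiles of those medians.
std::pair<double, double> bootstrap_ci95( const std::vector<double> & samples, int resamples = 10000 ) {
    std::mt19937 rng( 42 );  // fixed seed for reproducibility of the sketch
    std::uniform_int_distribution<std::size_t> pick( 0, samples.size() - 1 );
    std::vector<double> medians;
    for ( int r = 0; r < resamples; r += 1 ) {
        std::vector<double> resample;
        for ( std::size_t i = 0; i < samples.size(); i += 1 ) resample.push_back( samples[ pick( rng ) ] );
        medians.push_back( median( resample ) );
    }
    std::sort( medians.begin(), medians.end() );
    return { medians[ std::size_t( 0.025 * resamples ) ], medians[ std::size_t( 0.975 * resamples ) ] };
}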
     
  }
  \end{cfa}
- \caption{Deadlock avoidance benchmark pseudocode}
+ \caption{Deadlock avoidance benchmark \CFA pseudocode}
  \label{l:deadlock_avoid_pseudo}
  \end{figure}
     
  % sudo dmidecode -t system
  \item
- Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units) 2.0 GHz, TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
+ Supermicro AS--1123US--TR4 AMD EPYC 7662 64--core socket, hyper-threading $\times$ 2 sockets (256 processing units), TSO memory model, running Linux v5.8.0--55--generic, gcc--10 compiler
  \item
- Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (48 processing units) 2.2GHz, TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
+ Supermicro SYS--6029U--TR4 Intel Xeon Gold 5220R 24--core socket, hyper-threading $\times$ 2 sockets (96 processing units), TSO memory model, running Linux v5.8.0--59--generic, gcc--10 compiler
  \end{list}
  %The hardware architectures are different in threading (multithreading vs hyper), cache structure (MESI or MESIF), NUMA layout (QPI vs HyperTransport), memory model (TSO vs WO), and energy/thermal mechanisms (turbo-boost).
     
  For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
  It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain large confidence intervals for some of the data points in the \CC data.
- In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} there is the counter-intuitive result of the mutex statement performing better than the baseline.
+ In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel}, there is the counter-intuitive result of the @mutex@ statement performing better than the baseline.
  At 7 locks and above, the mutex statement switches from a hard-coded sort to insertion sort, which should decrease performance.
  The hard-coded sort is branch-free and constant-time and was verified to be faster than insertion sort for 6 or fewer locks.
- It is likely the increase in throughput compared to baseline is due to the delay spent in the insertion sort, which decreases contention on the locks.
-
+ Part of the difference in throughput compared to the baseline is due to the delay spent in the insertion sort, which decreases contention on the locks.
+ This was verified to be part of the difference in throughput by experimenting with varying NCS delays in the baseline; however, it only accounts for a small portion of the difference.
+ It is possible that the baseline is slowed down or the @mutex@ statement is sped up by other factors that are not easily identifiable.

  \begin{figure}