Index: doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex
===================================================================
--- doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex	(revision 21d1c9cebd7300910374cba53168bf8e6e62870e)
+++ doc/theses/colby_parsons_MMAth/text/mutex_stmt.tex	(revision e0e2f027c39064abf351e5aa4863fee98bdb1c08)
@@ -5,6 +5,6 @@
 % ======================================================================
 
-The mutual exclusion problem was introduced by Dijkstra in 1965~\cite{Dijkstra65,Dijkstra65a}.
-There are several concurrent processes or threads that communicate by shared variables and from time to time need exclusive access to shared resources.
+The mutual exclusion problem was introduced by Dijkstra in 1965~\cite{Dijkstra65,Dijkstra65a}:
+there are several concurrent processes or threads that communicate by shared variables and from time to time need exclusive access to shared resources.
 A shared resource and code manipulating it form a pairing called a \Newterm{critical section (CS)}, which is a many-to-one relationship;
 \eg if multiple files are being written to by multiple threads, only the pairings of simultaneous writes to the same files are CSs.
@@ -27,10 +27,11 @@
 \section{Monitor}
 \CFA provides a high-level locking object, called a \Newterm{monitor}, an elegant, efficient, high-level mechanism for mutual exclusion and synchronization in shared-memory systems.
-First proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}, several concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
+Monitors were first proposed by Brinch Hansen~\cite{Hansen73} and later described and extended by C.A.R.~Hoare~\cite{Hoare74}.
+Several concurrent programming languages provide monitors as an explicit language construct: \eg Concurrent Pascal~\cite{ConcurrentPascal}, Mesa~\cite{Mesa}, Turing~\cite{Turing:old}, Modula-3~\cite{Modula-3}, \uC~\cite{Buhr92a} and Java~\cite{Java}.
 In addition, operating-system kernels and device drivers have a monitor-like structure, although they often use lower-level primitives such as mutex locks or semaphores to manually implement a monitor.
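For illustration, such a manually implemented monitor might be sketched in C with POSIX primitives (a hypothetical example; the names here are invented, not taken from any kernel):

```c
#include <pthread.h>

// A monitor implemented by hand with a mutex, in the style used by
// kernels and drivers: mutual exclusion is bound to the function by
// explicit lock/unlock at entry and exit rather than by the language.
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

int counter_inc( void ) {                   // monitor "entry" routine
	pthread_mutex_lock( &counter_lock );    // acquire on entry
	int value = ++counter;                  // critical section
	pthread_mutex_unlock( &counter_lock );  // release on exit
	return value;
}
```

The fragility is visible: every return path must remember the unlock, which the language-level monitor performs implicitly.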
 
 Figure~\ref{f:AtomicCounter} shows a \CFA and Java monitor implementing an atomic counter.
 A \Newterm{monitor} is a programming technique that implicitly binds mutual exclusion to static function scope by call and return.
-Lock mutual exclusion, defined by acquire/release calls, is independent of lexical context (analogous to block versus heap storage allocation).
+In contrast, lock mutual exclusion, defined by acquire/release calls, is independent of lexical context (analogous to block versus heap storage allocation).
 Restricting acquire and release points in a monitor eases programming, comprehension, and maintenance, at a slight cost in flexibility and efficiency.
 Ultimately, a monitor is implemented using a combination of basic locks and atomic instructions.
@@ -271,8 +272,9 @@
 The @scoped_lock@ uses a deadlock avoidance algorithm where all locks after the first are acquired using @try_lock@, and if any lock attempt fails, all acquired locks are released.
 This repeats after selecting a new starting point in a cyclic manner until all locks are acquired successfully.
-This deadlock avoidance algorithm is shown in Listing~\ref{l:cc_deadlock_avoid}.
+This deadlock avoidance algorithm is shown in Figure~\ref{f:cc_deadlock_avoid}.
 The algorithm is taken directly from the source code of the @<mutex>@ header, with some renaming and comments for clarity.
 
-\begin{cfa}[caption={\CC \lstinline{scoped_lock} deadlock avoidance algorithm},label={l:cc_deadlock_avoid}]
+\begin{figure}
+\begin{cfa}
 int first = 0;  // first lock to attempt to lock
 do {
@@ -291,6 +293,9 @@
 } while ( ! locks[first].owns_lock() );  $\C{// is first lock held?}$
 \end{cfa}
-
-While the algorithm in \ref{l:cc_deadlock_avoid} successfully avoids deadlock, there is a livelock scenario.
+\caption{\CC \lstinline{scoped_lock} deadlock avoidance algorithm}
+\label{f:cc_deadlock_avoid}
+\end{figure}
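The same rotate-on-failure strategy can be sketched as a standalone \CC function (an illustrative reimplementation for clarity; the function name @lock_all@ is invented here, and this is not the library code shown above):

```cpp
#include <array>
#include <mutex>

// Illustrative reimplementation of the rotate-on-failure strategy:
// block on a chosen first lock, try_lock the rest in cyclic order,
// and on any failure release everything acquired and restart with
// the lock that failed as the new first lock.
template<std::size_t N>
void lock_all( std::array<std::mutex *, N> & locks ) {
	std::size_t first = 0;                        // first lock to attempt
	while ( true ) {
		locks[first]->lock();                     // block on the first lock
		std::size_t failed = N;                   // N means "none failed"
		for ( std::size_t i = 1; i < N; i += 1 ) {
			std::size_t idx = (first + i) % N;
			if ( ! locks[idx]->try_lock() ) { failed = idx; break; }
		}
		if ( failed == N ) return;                // all locks acquired
		locks[first]->unlock();                   // release everything held
		for ( std::size_t i = 1; (first + i) % N != failed; i += 1 )
			locks[(first + i) % N]->unlock();
		first = failed;                           // rotate the starting point
	}
}
```

Blocking only on the rotating first lock is what avoids deadlock: no thread ever waits while holding a lock it acquired by blocking.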
+
+While this algorithm successfully avoids deadlock, there is a livelock scenario.
 Assume two threads, $A$ and $B$, create a @scoped_lock@ accessing two locks, $L1$ and $L2$.
 A livelock can form as follows.
@@ -335,5 +340,5 @@
 \end{cquote}
 Comparatively, if the @scoped_lock@ is used and the same locks are acquired elsewhere, there is no concern of the @scoped_lock@ deadlocking, due to its avoidance scheme, but it may livelock.
-The convenience and safety of the @mutex@ statement, \eg guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating any deadlock scenario.
+The convenience and safety of the @mutex@ statement, \eg guaranteed lock release with exceptions, should encourage programmers to always use it for locking, mitigating the deadlock scenarios that arise when manual locking is mixed with the mutex statement.
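The release guarantee can be illustrated with \CC's @scoped_lock@, the closest standard analogue (a sketch in \CC rather than \CFA; the function @update_or_throw@ is invented for this example):

```cpp
#include <mutex>
#include <stdexcept>

// Sketch: even when the critical section throws, RAII guarantees the
// locks are released during stack unwinding, so later acquisitions
// cannot deadlock on an abandoned lock.
void update_or_throw( std::mutex & m1, std::mutex & m2, bool fail ) {
	std::scoped_lock guard( m1, m2 );    // acquires both, deadlock-free
	if ( fail ) throw std::runtime_error( "error inside critical section" );
	// ... critical-section work ...
}                                        // guard releases both locks here
```

With manual @lock@/@unlock@ calls, the early exit via the exception would leave both locks held.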
 
 \section{Performance}
@@ -383,7 +388,7 @@
 For example, on the AMD machine with 32 threads and 8 locks, the benchmarks would occasionally livelock indefinitely, with no threads making any progress for 3 hours before the experiment was terminated manually.
 It is likely that shorter bouts of livelock occurred in many of the experiments, which would explain large confidence intervals for some of the data points in the \CC data.
-In Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} the mutex statement performs better than the baseline.
-At 7 locks and above the mutex statement switches from a hard coded sort to insertion sort.
-It is likely that the improvement in throughput compared to baseline is due to the time spent in the insertion sort, which decreases contention on the locks.
+Figures~\ref{f:mutex_bench8_AMD} and \ref{f:mutex_bench8_Intel} show the counter-intuitive result that the mutex statement performs better than the baseline.
+At 7 locks and above, the mutex statement switches from a hard-coded sort to an insertion sort, which should decrease performance.
+It is likely the increase in throughput over the baseline is due to the delay introduced by the insertion sort, which decreases contention on the locks.
 
 \begin{figure}
